
kcp-go's Introduction

kcp-go

GoDoc Powered MIT licensed Build Status Go Report Card Coverage Status Sourcegraph

Introduction

kcp-go is a Production-Grade Reliable-UDP library for golang.

This library intends to provide smooth, resilient, ordered, error-checked and anonymous delivery of streams over UDP packets. It has been battle-tested with the open-source project kcptun. Millions of devices (from low-end MIPS routers to high-end servers) have deployed kcp-go powered programs in a variety of forms, such as online games, live broadcasting, file synchronization and network acceleration.

Latest Release

Features

  1. Designed for latency-sensitive scenarios.
  2. Cache-friendly and memory-optimized design, offering an extremely high-performance core.
  3. Handles >5K concurrent connections on a single commodity server.
  4. Compatible with net.Conn and net.Listener, a drop-in replacement for net.TCPConn.
  5. FEC (Forward Error Correction) support with Reed-Solomon codes.
  6. Packet-level encryption support with AES, TEA, 3DES, Blowfish, Cast5, Salsa20, etc. in CFB mode, which generates completely anonymous packets.
  7. Only a fixed number of goroutines are created for the entire server application; the cost of context switching between goroutines has been taken into consideration.
  8. Compatible with skywind3000's C version, with various improvements.
  9. Platform-dependent optimizations: sendmmsg and recvmmsg are exploited on Linux (see the sketch below).
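The last feature item refers to batched datagram I/O. Below is a minimal, illustrative sketch (not kcp-go's internal code) of how batched reads can be done with golang.org/x/net/ipv4, whose ReadBatch maps to recvmmsg on Linux:

package main

import (
	"log"
	"net"

	"golang.org/x/net/ipv4"
)

func main() {
	conn, err := net.ListenPacket("udp4", "127.0.0.1:0")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	log.Println("listening on", conn.LocalAddr())

	pc := ipv4.NewPacketConn(conn)

	// Prepare a batch of message buffers; a single ReadBatch call may fill
	// several of them with one syscall instead of one syscall per datagram.
	msgs := make([]ipv4.Message, 16)
	for i := range msgs {
		msgs[i].Buffers = [][]byte{make([]byte, 1500)}
	}

	n, err := pc.ReadBatch(msgs, 0) // blocks until at least one datagram arrives
	if err != nil {
		log.Fatal(err)
	}
	for _, m := range msgs[:n] {
		log.Printf("received %d bytes from %v", m.N, m.Addr)
	}
}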

Documentation

For complete documentation, see the associated Godoc.

Specification

Frame Format

NONCE:
  16-byte cryptographically secure random number; the nonce changes for every packet.
  
CRC32:
  CRC-32 checksum of data using the IEEE polynomial
 
FEC TYPE:
  typeData = 0xF1
  typeParity = 0xF2
  
FEC SEQID:
  monotonically increasing in range: [0, (0xffffffff/shardSize) * shardSize - 1]
  
SIZE:
  The size of the KCP frame plus 2
+-----------------+
| SESSION         |
+-----------------+
| KCP(ARQ)        |
+-----------------+
| FEC(OPTIONAL)   |
+-----------------+
| CRYPTO(OPTIONAL)|
+-----------------+
| UDP(PACKET)     |
+-----------------+
| IP              |
+-----------------+
| LINK            |
+-----------------+
| PHY             |
+-----------------+
(LAYER MODEL OF KCP-GO)
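As a rough illustration of the header fields described above (nonce, CRC32, FEC type/seqid, size), here is a hedged sketch of checksum verification after decryption; the offsets and names are illustrative, not the library's authoritative on-wire layout:

package frame

import (
	"encoding/binary"
	"errors"
	"hash/crc32"
)

const (
	nonceSize  = 16   // cryptographically secure random nonce, fresh per packet
	crcSize    = 4    // CRC-32 (IEEE polynomial) of the remaining data
	typeData   = 0xF1 // FEC data shard
	typeParity = 0xF2 // FEC parity shard
)

// verifyCRC strips the nonce, checks the CRC32 and returns the remaining
// bytes (FEC header + KCP frame) on success. Illustrative only; consult the
// kcp-go source for the authoritative field offsets.
func verifyCRC(pkt []byte) ([]byte, error) {
	if len(pkt) < nonceSize+crcSize {
		return nil, errors.New("packet too short")
	}
	body := pkt[nonceSize:]
	want := binary.LittleEndian.Uint32(body)
	if crc32.ChecksumIEEE(body[crcSize:]) != want {
		return nil, errors.New("checksum mismatch")
	}
	return body[crcSize:], nil
}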

Examples

  1. simple examples (see the minimal echo sketch below)
  2. kcptun client
  3. kcptun server
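For quick reference, here is a minimal, self-contained echo sketch assuming the v5 import path and the ListenWithOptions / DialWithOptions / AcceptKCP calls shown elsewhere on this page; it is illustrative, not an official example:

package main

import (
	"fmt"
	"log"

	kcp "github.com/xtaci/kcp-go/v5"
)

func main() {
	// Server: FEC with 10 data shards and 3 parity shards, no encryption (nil block).
	listener, err := kcp.ListenWithOptions("127.0.0.1:12345", nil, 10, 3)
	if err != nil {
		log.Fatal(err)
	}
	go func() {
		for {
			conn, err := listener.AcceptKCP()
			if err != nil {
				log.Fatal(err)
			}
			go func() {
				buf := make([]byte, 4096)
				for {
					n, err := conn.Read(buf)
					if err != nil {
						return
					}
					conn.Write(buf[:n]) // echo back
				}
			}()
		}
	}()

	// Client: must use the same FEC parameters as the server.
	sess, err := kcp.DialWithOptions("127.0.0.1:12345", nil, 10, 3)
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	sess.Write([]byte("ping"))
	buf := make([]byte, 4096)
	n, _ := sess.Read(buf)
	fmt.Println("echo:", string(buf[:n]))
}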

Benchmark

===
Model Name:	MacBook Pro
Model Identifier:	MacBookPro14,1
Processor Name:	Intel Core i5
Processor Speed:	3.1 GHz
Number of Processors:	1
Total Number of Cores:	2
L2 Cache (per Core):	256 KB
L3 Cache:	4 MB
Memory:	8 GB
===

$ go test -v -run=^$ -bench .
beginning tests, encryption:salsa20, fec:10/3
goos: darwin
goarch: amd64
pkg: github.com/xtaci/kcp-go
BenchmarkSM4-4                 	   50000	     32180 ns/op	  93.23 MB/s	       0 B/op	       0 allocs/op
BenchmarkAES128-4              	  500000	      3285 ns/op	 913.21 MB/s	       0 B/op	       0 allocs/op
BenchmarkAES192-4              	  300000	      3623 ns/op	 827.85 MB/s	       0 B/op	       0 allocs/op
BenchmarkAES256-4              	  300000	      3874 ns/op	 774.20 MB/s	       0 B/op	       0 allocs/op
BenchmarkTEA-4                 	  100000	     15384 ns/op	 195.00 MB/s	       0 B/op	       0 allocs/op
BenchmarkXOR-4                 	20000000	        89.9 ns/op	33372.00 MB/s	       0 B/op	       0 allocs/op
BenchmarkBlowfish-4            	   50000	     26927 ns/op	 111.41 MB/s	       0 B/op	       0 allocs/op
BenchmarkNone-4                	30000000	        45.7 ns/op	65597.94 MB/s	       0 B/op	       0 allocs/op
BenchmarkCast5-4               	   50000	     34258 ns/op	  87.57 MB/s	       0 B/op	       0 allocs/op
Benchmark3DES-4                	   10000	    117149 ns/op	  25.61 MB/s	       0 B/op	       0 allocs/op
BenchmarkTwofish-4             	   50000	     33538 ns/op	  89.45 MB/s	       0 B/op	       0 allocs/op
BenchmarkXTEA-4                	   30000	     45666 ns/op	  65.69 MB/s	       0 B/op	       0 allocs/op
BenchmarkSalsa20-4             	  500000	      3308 ns/op	 906.76 MB/s	       0 B/op	       0 allocs/op
BenchmarkCRC32-4               	20000000	        65.2 ns/op	15712.43 MB/s
BenchmarkCsprngSystem-4        	 1000000	      1150 ns/op	  13.91 MB/s
BenchmarkCsprngMD5-4           	10000000	       145 ns/op	 110.26 MB/s
BenchmarkCsprngSHA1-4          	10000000	       158 ns/op	 126.54 MB/s
BenchmarkCsprngNonceMD5-4      	10000000	       153 ns/op	 104.22 MB/s
BenchmarkCsprngNonceAES128-4   	100000000	        19.1 ns/op	 837.81 MB/s
BenchmarkFECDecode-4           	 1000000	      1119 ns/op	1339.61 MB/s	    1606 B/op	       2 allocs/op
BenchmarkFECEncode-4           	 2000000	       832 ns/op	1801.83 MB/s	      17 B/op	       0 allocs/op
BenchmarkFlush-4               	 5000000	       272 ns/op	       0 B/op	       0 allocs/op
BenchmarkEchoSpeed4K-4         	    5000	    259617 ns/op	  15.78 MB/s	    5451 B/op	     149 allocs/op
BenchmarkEchoSpeed64K-4        	    1000	   1706084 ns/op	  38.41 MB/s	   56002 B/op	    1604 allocs/op
BenchmarkEchoSpeed512K-4       	     100	  14345505 ns/op	  36.55 MB/s	  482597 B/op	   13045 allocs/op
BenchmarkEchoSpeed1M-4         	      30	  34859104 ns/op	  30.08 MB/s	 1143773 B/op	   27186 allocs/op
BenchmarkSinkSpeed4K-4         	   50000	     31369 ns/op	 130.57 MB/s	    1566 B/op	      30 allocs/op
BenchmarkSinkSpeed64K-4        	    5000	    329065 ns/op	 199.16 MB/s	   21529 B/op	     453 allocs/op
BenchmarkSinkSpeed256K-4       	     500	   2373354 ns/op	 220.91 MB/s	  166332 B/op	    3554 allocs/op
BenchmarkSinkSpeed1M-4         	     300	   5117927 ns/op	 204.88 MB/s	  310378 B/op	    6988 allocs/op
PASS
ok  	github.com/xtaci/kcp-go	50.349s
=== Raspberry Pi 4 ===

➜  kcp-go git:(master) cat /proc/cpuinfo
processor	: 0
model name	: ARMv7 Processor rev 3 (v7l)
BogoMIPS	: 108.00
Features	: half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt vfpd32 lpae evtstrm crc32
CPU implementer	: 0x41
CPU architecture: 7
CPU variant	: 0x0
CPU part	: 0xd08
CPU revision	: 3

➜  kcp-go git:(master)  go test -run=^$ -bench .
2020/01/05 19:25:13 beginning tests, encryption:salsa20, fec:10/3
goos: linux
goarch: arm
pkg: github.com/xtaci/kcp-go/v5
BenchmarkSM4-4                     20000             86475 ns/op          34.69 MB/s           0 B/op          0 allocs/op
BenchmarkAES128-4                  20000             62254 ns/op          48.19 MB/s           0 B/op          0 allocs/op
BenchmarkAES192-4                  20000             71802 ns/op          41.78 MB/s           0 B/op          0 allocs/op
BenchmarkAES256-4                  20000             80570 ns/op          37.23 MB/s           0 B/op          0 allocs/op
BenchmarkTEA-4                     50000             37343 ns/op          80.34 MB/s           0 B/op          0 allocs/op
BenchmarkXOR-4                    100000             22266 ns/op         134.73 MB/s           0 B/op          0 allocs/op
BenchmarkBlowfish-4                20000             66123 ns/op          45.37 MB/s           0 B/op          0 allocs/op
BenchmarkNone-4                  3000000               518 ns/op        5786.77 MB/s           0 B/op          0 allocs/op
BenchmarkCast5-4                   20000             76705 ns/op          39.11 MB/s           0 B/op          0 allocs/op
Benchmark3DES-4                     5000            418868 ns/op           7.16 MB/s           0 B/op          0 allocs/op
BenchmarkTwofish-4                  5000            326896 ns/op           9.18 MB/s           0 B/op          0 allocs/op
BenchmarkXTEA-4                    10000            114418 ns/op          26.22 MB/s           0 B/op          0 allocs/op
BenchmarkSalsa20-4                 50000             36736 ns/op          81.66 MB/s           0 B/op          0 allocs/op
BenchmarkCRC32-4                 1000000              1735 ns/op         589.98 MB/s
BenchmarkCsprngSystem-4          1000000              2179 ns/op           7.34 MB/s
BenchmarkCsprngMD5-4             2000000               811 ns/op          19.71 MB/s
BenchmarkCsprngSHA1-4            2000000               862 ns/op          23.19 MB/s
BenchmarkCsprngNonceMD5-4        2000000               878 ns/op          18.22 MB/s
BenchmarkCsprngNonceAES128-4     5000000               326 ns/op          48.97 MB/s
BenchmarkFECDecode-4              200000              9081 ns/op         165.16 MB/s         140 B/op          1 allocs/op
BenchmarkFECEncode-4              100000             12039 ns/op         124.59 MB/s          11 B/op          0 allocs/op
BenchmarkFlush-4                  100000             21704 ns/op               0 B/op          0 allocs/op
BenchmarkEchoSpeed4K-4              2000            981182 ns/op           4.17 MB/s       12384 B/op        424 allocs/op
BenchmarkEchoSpeed64K-4              100          10503324 ns/op           6.24 MB/s      123616 B/op       3779 allocs/op
BenchmarkEchoSpeed512K-4              20         138633802 ns/op           3.78 MB/s     1606584 B/op      29233 allocs/op
BenchmarkEchoSpeed1M-4                 5         372903568 ns/op           2.81 MB/s     4080504 B/op      63600 allocs/op
BenchmarkSinkSpeed4K-4             10000            121239 ns/op          33.78 MB/s        4647 B/op        104 allocs/op
BenchmarkSinkSpeed64K-4             1000           1587906 ns/op          41.27 MB/s       50914 B/op       1115 allocs/op
BenchmarkSinkSpeed256K-4             100          16277830 ns/op          32.21 MB/s      453027 B/op       9296 allocs/op
BenchmarkSinkSpeed1M-4               100          31040703 ns/op          33.78 MB/s      898097 B/op      18932 allocs/op
PASS
ok      github.com/xtaci/kcp-go/v5      64.151s

Typical Flame Graph

Flame Graph in kcptun

Key Design Considerations

  1. slice vs. container/list

kcp.flush() loops through the send queue for retransmission checking every 20ms (the interval).

I've written a benchmark comparing sequential iteration over a slice and a container/list here:

https://github.com/xtaci/notes/blob/master/golang/benchmark2/cachemiss_test.go

BenchmarkLoopSlice-4   	2000000000	         0.39 ns/op
BenchmarkLoopList-4    	100000000	        54.6 ns/op

A list structure introduces heavy cache misses compared to a slice, which has better locality. For 5000 connections with a window size of 32 and a 20ms interval, each kcp.flush() pass costs 6µs (0.03% CPU) with a slice versus 8.7ms (43.5% CPU) with a list.
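A minimal sketch in the spirit of the linked cachemiss_test.go, assuming a traversal of a pre-filled slice and list (absolute numbers will differ from those above):

package cachemiss

import (
	"container/list"
	"testing"
)

const elems = 4096

// BenchmarkLoopSlice walks contiguous memory: few cache misses.
func BenchmarkLoopSlice(b *testing.B) {
	s := make([]int, elems)
	sum := 0
	for i := 0; i < b.N; i++ {
		for _, v := range s {
			sum += v
		}
	}
	_ = sum
}

// BenchmarkLoopList chases pointers between heap nodes: many cache misses.
func BenchmarkLoopList(b *testing.B) {
	l := list.New()
	for i := 0; i < elems; i++ {
		l.PushBack(i)
	}
	sum := 0
	b.ResetTimer()
	for i := 0; i < b.N; i++ {
		for e := l.Front(); e != nil; e = e.Next() {
			sum += e.Value.(int)
		}
	}
	_ = sum
}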

  2. Timing accuracy vs. syscall clock_gettime

Timing is critical to the RTT estimator: inaccurate timing leads to spurious retransmissions in KCP, but calling time.Now() costs about 42 cycles (10.5ns on a 4GHz CPU, 15.6ns on my 2.7GHz MacBook Pro).

The benchmark for time.Now() is here:

https://github.com/xtaci/notes/blob/master/golang/benchmark2/syscall_test.go

BenchmarkNow-4         	100000000	        15.6 ns/op

In kcp-go, the current clock time is refreshed after each kcp.output() call returns, and within a single kcp.flush() operation the system time is queried only once. Most of the time, 5000 connections cost 5000 * 15.6ns = 78µs (a fixed cost when no packets need to be sent); for a 10MB/s transfer with a 1400-byte MTU, kcp.output() is called around 7500 times per second, costing roughly 117µs of time.Now() every second.
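A minimal sketch along the lines of the linked syscall_test.go; on most platforms time.Now() is served by the vDSO rather than a full syscall, and absolute numbers vary by CPU:

package timing

import (
	"testing"
	"time"
)

// BenchmarkNow measures the cost of reading the wall clock once per iteration.
func BenchmarkNow(b *testing.B) {
	for i := 0; i < b.N; i++ {
		_ = time.Now()
	}
}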

  3. Memory management

Primary memory allocations are served from a global buffer pool, xmitBuf. In kcp-go, when we need some bytes we get them from that pool, and a slice with a fixed capacity of 1500 bytes (mtuLimit) is returned; the rx queue, tx queue and FEC queue all receive bytes from there and return them to the pool after use, preventing unnecessary zeroing of bytes. The pool mechanism maintains a high watermark of slice objects: in-flight objects taken from the pool survive the periodic garbage collection, while the pool still retains the ability to return memory to the runtime when idle.
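A minimal sketch of the idea, assuming a sync.Pool handing out fixed 1500-byte (mtuLimit) slices; the names mirror the description above, but this is not the library's exact implementation:

package buffers

import "sync"

const mtuLimit = 1500

// xmitBuf hands out fixed-capacity buffers so hot paths avoid per-packet allocation.
var xmitBuf = sync.Pool{
	New: func() interface{} { return make([]byte, mtuLimit) },
}

// getBuf takes a 1500-byte slice from the pool (allocating one only if the pool is empty).
func getBuf() []byte { return xmitBuf.Get().([]byte) }

// putBuf returns a slice to the pool; callers must not retain references to it.
func putBuf(buf []byte) { xmitBuf.Put(buf[:cap(buf)]) }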

  4. Information security

kcp-go ships with built-in packet encryption powered by various block ciphers operating in Cipher Feedback (CFB) mode. For each outgoing packet, encryption starts by encrypting a nonce drawn from the system entropy source, so encrypting the same plaintext never yields the same ciphertext.
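A simplified sketch of the idea (not kcp-go's exact construction, which lives behind its BlockCrypt interface): a fresh random nonce prefixed to each packet ensures identical payloads never produce identical ciphertexts under CFB:

package crypt

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"io"
)

const nonceSize = 16

// seal prepends a fresh random nonce and encrypts the whole packet with AES-CFB.
// Because the first block differs every time, equal plaintexts yield different
// ciphertexts. Illustrative only; use the library's BlockCrypt in real code.
func seal(key, packet []byte) ([]byte, error) {
	block, err := aes.NewCipher(key) // key must be 16, 24 or 32 bytes
	if err != nil {
		return nil, err
	}
	out := make([]byte, nonceSize+len(packet))
	if _, err := io.ReadFull(rand.Reader, out[:nonceSize]); err != nil {
		return nil, err
	}
	copy(out[nonceSize:], packet)

	iv := make([]byte, aes.BlockSize) // fixed IV; the random nonce provides per-packet variation
	cipher.NewCFBEncrypter(block, iv).XORKeyStream(out, out)
	return out, nil
}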

With encryption, the contents of the packets are completely anonymous, including the headers (FEC, KCP), checksums and payload. Note that no matter which encryption method you choose at the upper layer, if you disable kcp-go's encryption the transport is insecure to some degree: the header is PLAINTEXT to everyone and is susceptible to tampering, such as jamming the sliding-window size, round-trip time, FEC properties and checksums. AES-128 is suggested as the minimal encryption, since modern CPUs ship with AES-NI instructions and it performs even better than Salsa20 (check the table above).

Other possible attacks against kcp-go include: a) traffic analysis — dataflow to specific websites may show patterns while exchanging data; this type of eavesdropping has been mitigated by adopting smux to mix data streams and introduce noise, but no perfect solution has appeared yet; theoretically, shuffling/mixing messages at larger network scale may further mitigate the problem. b) replay attack — since asymmetric encryption has not been introduced into kcp-go, capturing packets and replaying them from a different machine is possible (note: hijacking the session and decrypting the contents is still impossible), so the upper layer should include an asymmetric cryptosystem to guarantee the authenticity of each message (i.e. to process each message exactly once), such as HTTPS/OpenSSL/LibreSSL; only signing requests with private keys can eliminate this type of attack.

Connection Termination

Control messages like SYN/FIN/RST in TCP are not defined in KCP; you need a keepalive/heartbeat mechanism at the application level. A real-world example is to run a multiplexing protocol over the session, such as smux (with an embedded keepalive mechanism); see kcptun for an example.
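A minimal sketch of layering smux over a kcp-go session to get keepalive plus multiplexed streams; the import paths and calls (smux.Client, OpenStream) are assumed from github.com/xtaci/smux, and error handling is trimmed:

package main

import (
	"log"

	kcp "github.com/xtaci/kcp-go/v5"
	"github.com/xtaci/smux"
)

func main() {
	conn, err := kcp.DialWithOptions("127.0.0.1:12345", nil, 10, 3)
	if err != nil {
		log.Fatal(err)
	}

	// smux's default config sends periodic keepalive frames, so a dead peer is
	// eventually detected even though KCP itself has no SYN/FIN/RST.
	session, err := smux.Client(conn, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	stream, err := session.OpenStream()
	if err != nil {
		log.Fatal(err)
	}
	defer stream.Close()
	stream.Write([]byte("hello over a multiplexed stream"))
}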

FAQ

Q: I'm handling >5K connections on my server and the CPU utilization is very high.

A: A standalone agent or gateway server running kcp-go is suggested, not only for CPU utilization but also for the precision of RTT measurement (timing), which indirectly affects retransmission. Increasing the update interval with SetNoDelay, e.g. conn.SetNoDelay(1, 40, 1, 1), will dramatically reduce system load, at some cost in performance.
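A short sketch of the tuning above; the four SetNoDelay arguments follow the usual KCP convention (nodelay flag, update interval in ms, fast-resend trigger, congestion control disabled when the last flag is 1). The address is a placeholder:

package main

import (
	"log"

	kcp "github.com/xtaci/kcp-go/v5"
)

func main() {
	conn, err := kcp.DialWithOptions("gateway.example.com:4000", nil, 10, 3)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	// A 40ms update interval trades a little latency for much lower CPU cost
	// when the server handles thousands of sessions.
	conn.SetNoDelay(1, 40, 1, 1)
}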

Q: When should I enable FEC?

A: Forward error correction is critical for long-distance transmission, because a single packet loss incurs a huge time penalty. On today's complicated packet-routing networks, RTT-based loss detection is not always efficient: the large deviation of RTT samples over a long path usually leads to a larger RTO in a typical RTT estimator, which in other words slows down the transmission.

Q: Should I enable encryption?

A: Yes, for the safety of the protocol, even if the upper layer is already encrypted.

Who is using this?

  1. https://github.com/xtaci/kcptun -- A Secure Tunnel Based On KCP over UDP.
  2. https://github.com/getlantern/lantern -- Lantern delivers fast access to the open Internet.
  3. https://github.com/smallnest/rpcx -- An RPC service framework based on net/rpc, like Alibaba Dubbo and Weibo Motan.
  4. https://github.com/gonet2/agent -- A gateway for games with stream multiplexing.
  5. https://github.com/syncthing/syncthing -- Open Source Continuous File Synchronization.

Links

  1. https://github.com/xtaci/smux/ -- A Stream Multiplexing Library for golang with least memory
  2. https://github.com/xtaci/libkcp -- FEC enhanced KCP session library for iOS/Android in C++
  3. https://github.com/skywind3000/kcp -- A Fast and Reliable ARQ Protocol
  4. https://github.com/klauspost/reedsolomon -- Reed-Solomon Erasure Coding in Go

kcp-go's People

Contributors

audriusbutkevicius, autoexpect, cristaloleg, dependabot[bot], eudi4h, fulirockx, genisysram, haraldnordgren, horjulf, jinq0123, lonng, rapiz1, templexxx, xjdrew, xtaci



kcp-go's Issues

On the order of packet processing

Could encryption be performed before KCP, so that retransmitted packets don't need to be encrypted again? I think this could save some CPU.

Unexpected behavior

package main

import (
    "fmt"
    "time"
    "net"
    "github.com/AudriusButkevicius/kcp-go"
)

func dial(addr net.Addr) {
    c, err := kcp.Dial(addr.String())
    if err != nil {
        panic(err)
    }
    n, err := c.Write([]byte("hello"))
    if err != nil {
        panic(err)
    }
    b := make([]byte, 10)
    n, err = c.Read(b)
    fmt.Println("DIAL", string(b[:n]), err)
    c.Close()
}

func main() {
    l, err := kcp.Listen("127.0.0.1:0")
    fmt.Println(l.Addr().String())
    caddr := l.Addr()
    if err != nil {
        panic(err)
    }

    b := make([]byte, 10)

    go func() {
        time.Sleep(time.Second)
        dial(caddr)
    }()

    go func() {
        time.Sleep(time.Second*5)
        panic("timeout")
    }()

    for {
        c, err := l.Accept()
        if err != nil {
            panic(err)
        }
        c.Write([]byte("hi"))
        n, err := c.Read(b)
        fmt.Println("LISTEN", string(b[:n]), err)
        c.Close()
    }
}

I'd expect the output to be:

127.0.0.1:60678
LISTEN hello <nil>
DIAL hi <nil>
panic: timeout

Actual output:

127.0.0.1:60678
LISTEN hello <nil>
LISTEN hello <nil>
LISTEN hello <nil>
LISTEN hello <nil>
LISTEN hello <nil>
LISTEN hello <nil>
panic: timeout

I guess it always assumes a new conversation, given .Close() is called on the accept side?

Data may occasionally get corrupted

Similar to kcptun, I layered yamux on top of kcp-go to transfer data between two servers. The server side runs a simple echo server, and I wrote a test program that sends 1024 bytes every 50ms and measures the RTT. The latency is quite low (0-15ms), but after running for a while yamux reports an invalid protocol version error and the connection drops. With a version that doesn't yet include SetACKNoDelay this doesn't happen, but the latency is slightly higher (around 50ms; I can no longer find the exact version). The test program is here:
https://github.com/zx9597446/echoping
I can't tell whether this is a kcp-go problem or a yamux problem; I suspect there's some chance that data sent by kcp-go gets corrupted?

Simple Messaging Example

Can someone please write up a simple client/server messaging example? I'm from the NodeJS world and want to test several RUDP implementations to check latency. Can someone please provide me with something that follows a simple "easy-to-grasp" UDP client/server example like the one below?

udp_client.go

package main
 
import (
    "fmt"
    "net"
    "time"
    "strconv"
)
 
func CheckError(err error) {
    if err  != nil {
        fmt.Println("Error: " , err)
    }
}
 
func main() {
    ServerAddr,err := net.ResolveUDPAddr("udp","127.0.0.1:10001")
    CheckError(err)
 
    LocalAddr, err := net.ResolveUDPAddr("udp", "127.0.0.1:0")
    CheckError(err)
 
    Conn, err := net.DialUDP("udp", LocalAddr, ServerAddr)
    CheckError(err)
 
    defer Conn.Close()
    i := 0
    for {
        msg := strconv.Itoa(i)
        i++
        buf := []byte(msg)
        _,err := Conn.Write(buf)
        if err != nil {
            fmt.Println(msg, err)
        }
        time.Sleep(time.Second * 1)
    }
}

A three-way close handshake?

Compared to TCP, the kcp-go protocol currently lacks a connection-close handshake.
When one peer A closes the connection, the other peer B doesn't know that A has closed it, and B's connection stays hung until it times out.
This problem makes it hard for KCP to replace ss (Shadowsocks) as a proxy protocol.

Where can I download a version from before June 29?

There's currently a problem connecting a kcp-csharp client to a kcp-go server.
I noticed that kcp-go committed new files on June 29, while kcp-csharp was last updated in May.
Where can I download a kcp-go version from before May? Thanks.

test failure with go 1.9

I'm not sure what the cause is here, but the tests fail with go 1.9.

The output from the failing test is:

+ go test -compiler gc -ldflags '' github.com/xtaci/kcp-go
beginning tests, encryption:salsa20, fec:10/3
panic: reflect: call of reflect.Value.Int on zero Value
goroutine 35 [running]:
reflect.Value.Int(0x0, 0x0, 0x0, 0x820250)
	/usr/lib/golang/src/reflect/value.go:908 +0x141
golang.org/x/net/internal/netreflect.socketOf(0xa162e0, 0xc4201ae000, 0x0, 0xc420033748, 0x743fce)
	/usr/share/gocode/src/golang.org/x/net/internal/netreflect/socket_posix.go:26 +0x196
golang.org/x/net/internal/netreflect.SocketOf(0xa162e0, 0xc4201ae000, 0x80e740, 0x1, 0xa162e0)
	/usr/share/gocode/src/golang.org/x/net/internal/netreflect/socket.go:23 +0x65
golang.org/x/net/ipv4.(*genericOpt).SetTOS(0xc420033778, 0xb8, 0xc4201ae000, 0xa162e0)
	/usr/share/gocode/src/golang.org/x/net/ipv4/genericopt_posix.go:33 +0x51
github.com/xtaci/kcp-go.(*Listener).SetDSCP(0xc420186080, 0x2e, 0x0, 0x0)
	/builddir/build/BUILDROOT/golang-github-xtaci-kcp-go-3.17-2.fc27.x86_64/usr/share/gocode/src/github.com/xtaci/kcp-go/sess.go:785 +0x8a
github.com/xtaci/kcp-go.sinkServer.func1(0xa130c0, 0xc420186080)
	/builddir/build/BUILDROOT/golang-github-xtaci-kcp-go-3.17-2.fc27.x86_64/usr/share/gocode/src/github.com/xtaci/kcp-go/sess_test.go:154 +0x7b
created by github.com/xtaci/kcp-go.sinkServer
	/builddir/build/BUILDROOT/golang-github-xtaci-kcp-go-3.17-2.fc27.x86_64/usr/share/gocode/src/github.com/xtaci/kcp-go/sess_test.go:150 +0x5c
FAIL	github.com/xtaci/kcp-go	0.018s

Reconnect from same IP fails if there's only one active client.

lastAddr should be cleared when a session is closed.

index fac3122..8725f40 100644
--- a/sess.go
+++ b/sess.go
@@ -754,6 +754,7 @@ func (l *Listener) monitor() {
 
                        xmitBuf.Put(raw)
                case deadlink := <-l.chSessionClosed:
+                       lastAddr = ""
                        delete(l.sessions, deadlink.String())
                case <-l.die:
                        return

Performance issue

Using a C# client, I simulated 10 connections each sending 30 packets per second. The server's CPU usage is very high, many times higher than a TCP server's.

Fully custom encryption

Currently UDPSession accepts a cipher.Block parameter at construction time and uses it to encrypt the UDP data. Could this parameter be generalized to a custom type, for example:

type Payload []byte

type Cryptor interface {
  Encrypt(Payload, Payload)
  Decrypt(Payload, Payload)
}

That way, kcp-go's core code would not have to depend on any particular encryption scheme.

Bind local port for remote connection

Hi, it would be nice to have an option to bind the local port in a client for a remote connection in the DialWithOptions function.
Binding the outgoing UDP socket to an explicit port number is helpful for NAT traversal.
Please add an argument to the DialWithOptions function and pass it as DialUDP's second argument.

UDP to KCP Connection Upgrade

If I have already established a net.UDPConn connection (like a UDP rendezvous for example), and want to upgrade that to a KCP connection, is there a way to do that with the library as-is, or would it require a new function?

Also, noticed that the Set(Read/Write)Deadline functions for Listener and UDPSession don't update the deadlines for the underlying connection (so if the inherited connection already had a deadline set, the Listener/UDPSession calls will not reset it).

What am I missing here?

Thanks!

Could KCP sessions have a dup (collision) problem?

func NewConn(raddr string, block BlockCrypt, dataShards, parityShards int, conn net.PacketConn) (*UDPSession, error) {
	udpaddr, err := net.ResolveUDPAddr("udp", raddr)
	if err != nil {
		return nil, errors.Wrap(err, "net.ResolveUDPAddr")
	}

	var convid uint32
	binary.Read(rand.Reader, binary.LittleEndian, &convid)
	return newUDPSession(convid, dataShards, parityShards, nil, conn, udpaddr, block), nil
}

I see that convid is initialized from a random number on the dialing side, and the (addr, conv) pair is used as the session key. So if the same client dials the same server multiple times, there's still a non-zero probability that a later session overwrites an earlier one, right? How do you avoid this in real projects?

Could a KCP version with a buffer pool be implemented?

Using Go on routers always runs into memory problems; Go's memory usage grows quickly. Could a memory pool be implemented at NewSegment, for example with sync.Pool? That would require freeing memory manually, though!

Should the buf length be checked?

kcp.go:180 seg.data = xmitBuf.Get().([]byte)[:size]

How is it guaranteed here that the length of xmitBuf.Get().([]byte) is >= size when creating the new slice?

I've hit a crash on this line a few times. I later found it was caused by malformed input packets; I'm not sure whether that counts as a bug.

Retransmitted packets appear even in local KCP tests

I wrote a client and a server with kcp-go, both running locally. The client reads data from /dev/zero and sends it to the server, which discards it after receiving. Yet according to the SNMP statistics, the client shows retransmissions. I'm curious why retransmitted packets appear; ACKs shouldn't be getting lost locally. In the KCP settings, congestion control and fast retransmit are both disabled.

typo

Conventions part in README.md
s/embeded/embedded/

To support UDP hole punching I modified sess.go, but the tests fail.

I don't know why, but when running my whole sess2_test.go file, the TestBigPacket test always stalls after three writes. Running just the following command completes fine:
go test -run BigPacket

What could cause this test to stall? The attachment is based on commit 514496a; I also disabled the last test.
My goal is to use KCP for hole punching, so I added Dial and DialWithOptions methods to Listener.
If possible, could you provide a better approach for UDP hole punching in an upcoming update? Or can the existing code already do this and I'm simply using it incorrectly?
My Go version is "go version go1.7rc3 windows/386", tested in "LiteIDE X30.2"; the environment variables in LiteIDE are set as follows:

# native compiler windows 386

GOROOT=c:\go
#GOBIN=
GOARCH=386
GOOS=windows
CGO_ENABLED=1

GO15VENDOREXPERIMENT=1

PATH=c:\mingw32\bin;%GOROOT%\bin;%PATH%

LITEIDE_GDB=gdb
LITEIDE_MAKE=mingw32-make
LITEIDE_TERM=%COMSPEC%
LITEIDE_TERMARGS=
LITEIDE_EXEC=%COMSPEC%
LITEIDE_EXECOPT=/C

Only the sess2_test.go tests were kept; the results are below:

=== RUN   TestTimeout
2016/08/11 19:59:11 listening on: 127.0.0.1:9999
2016/08/11 19:59:11 listening on: 127.0.0.1:6666
2016/08/11 19:59:11 l.sessions[127.0.0.1:9999] does not exist: local:127.0.0.1:6666; remote:127.0.0.1:9999
--- PASS: TestTimeout (2.07s)
=== RUN   TestClose
--- PASS: TestClose (0.02s)
=== RUN   TestSendRecv
sent: hello0
new client 127.0.0.1:6666
recv: hello0
sent: hello1
recv: hello1
sent: hello2
2016/08/11 19:59:13 l.sessions[127.0.0.1:9999] does not exist: local:127.0.0.1:6666; remote:127.0.0.1:9999
2016/08/11 19:59:13 l.sessions[127.0.0.1:9999] does not exist: local:127.0.0.1:6666; remote:127.0.0.1:9999
recv: hello2
sent: hello3
recv: hello3
sent: hello4
recv: hello4
sent: hello5
recv: hello5
sent: hello6
recv: hello6
sent: hello7
recv: hello7
sent: hello8
recv: hello8
sent: hello9
recv: hello9
sent: hello10
recv: hello10
sent: hello11
recv: hello11
sent: hello12
recv: hello12
sent: hello13
recv: hello13
sent: hello14
recv: hello14
sent: hello15
recv: hello15
sent: hello16
recv: hello16
sent: hello17
recv: hello17
sent: hello18
recv: hello18
sent: hello19
recv: hello19
sent: hello20
recv: hello20
sent: hello21
recv: hello21
sent: hello22
recv: hello22
sent: hello23
recv: hello23
sent: hello24
recv: hello24
sent: hello25
recv: hello25
sent: hello26
recv: hello26
sent: hello27
recv: hello27
sent: hello28
recv: hello28
sent: hello29
recv: hello29
sent: hello30
recv: hello30
sent: hello31
recv: hello31
sent: hello32
recv: hello32
sent: hello33
recv: hello33
sent: hello34
recv: hello34
sent: hello35
recv: hello35
sent: hello36
recv: hello36
sent: hello37
recv: hello37
sent: hello38
recv: hello38
sent: hello39
recv: hello39
sent: hello40
recv: hello40
sent: hello41
recv: hello41
sent: hello42
recv: hello42
sent: hello43
recv: hello43
sent: hello44
recv: hello44
sent: hello45
recv: hello45
sent: hello46
recv: hello46
sent: hello47
recv: hello47
sent: hello48
recv: hello48
sent: hello49
recv: hello49
sent: hello50
recv: hello50
sent: hello51
recv: hello51
sent: hello52
recv: hello52
sent: hello53
recv: hello53
sent: hello54
recv: hello54
sent: hello55
recv: hello55
sent: hello56
recv: hello56
sent: hello57
recv: hello57
sent: hello58
recv: hello58
sent: hello59
recv: hello59
sent: hello60
recv: hello60
sent: hello61
recv: hello61
sent: hello62
recv: hello62
sent: hello63
recv: hello63
sent: hello64
recv: hello64
sent: hello65
recv: hello65
sent: hello66
recv: hello66
sent: hello67
recv: hello67
sent: hello68
recv: hello68
sent: hello69
recv: hello69
sent: hello70
recv: hello70
sent: hello71
recv: hello71
sent: hello72
recv: hello72
sent: hello73
recv: hello73
sent: hello74
recv: hello74
sent: hello75
recv: hello75
sent: hello76
recv: hello76
sent: hello77
recv: hello77
sent: hello78
recv: hello78
sent: hello79
recv: hello79
sent: hello80
recv: hello80
sent: hello81
recv: hello81
sent: hello82
recv: hello82
sent: hello83
recv: hello83
sent: hello84
recv: hello84
sent: hello85
recv: hello85
sent: hello86
recv: hello86
sent: hello87
recv: hello87
sent: hello88
recv: hello88
sent: hello89
recv: hello89
sent: hello90
recv: hello90
sent: hello91
recv: hello91
sent: hello92
recv: hello92
sent: hello93
recv: hello93
sent: hello94
recv: hello94
sent: hello95
recv: hello95
sent: hello96
recv: hello96
sent: hello97
recv: hello97
sent: hello98
recv: hello98
sent: hello99
recv: hello99
--- PASS: TestSendRecv (0.63s)
=== RUN   TestBigPacket
TestBigPacket write #1... wrote 524288 bytes
TestBigPacket write #2... wrote 524288 bytes
TestBigPacket write #3...2016/08/11 19:59:14 l.sessions[127.0.0.1:9999] does not exist: local:127.0.0.1:6666; remote:127.0.0.1:9999
2016/08/11 19:59:16 key:127.0.0.1:9999: true


exit status 1
FAIL    github.com/xtaci/kcp-go    10.425s
Error: process exit code 1.

I've only just started learning Go, so many of my descriptions may be inaccurate or unclear. Thanks for pointing them out, and thank you for taking the time to read this message.

sess.zip

The cpufeat library uses unsupported instructions

github.com/templexxx/cpufeat

src/github.com/templexxx/cpufeat/cpu_x86.s:28: unrecognized instruction "XGETBV"
asm: asm: assembly of src/github.com/templexxx/cpufeat/cpu_x86.s failed

Data Races

I was just trying out kcp-go and detected some data races.

==================
WARNING: DATA RACE
Read by goroutine 170:
  math/rand.(*rngSource).Int63()
      /usr/local/go/src/math/rand/rng.go:233 +0x32
  math/rand.(*Rand).Int63()
      /usr/local/go/src/math/rand/rand.go:46 +0x54
  math/rand.(*Rand).Int31()
      /usr/local/go/src/math/rand/rand.go:52 +0x2e
  math/rand.(*Rand).Int31n()
      /usr/local/go/src/math/rand/rand.go:87 +0xd3
  math/rand.(*Rand).Intn()
      /usr/local/go/src/math/rand/rand.go:101 +0x9f
  github.com/xtaci/kcp-go.(*UDPSession).outputTask()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:455 +0x2453

Previous write by goroutine 144:
  math/rand.(*rngSource).Int63()
      /usr/local/go/src/math/rand/rng.go:233 +0x48
  math/rand.(*Rand).Int63()
      /usr/local/go/src/math/rand/rand.go:46 +0x54
  math/rand.(*Rand).Int31()
      /usr/local/go/src/math/rand/rand.go:52 +0x2e
  math/rand.(*Rand).Int31n()
      /usr/local/go/src/math/rand/rand.go:87 +0xd3
  math/rand.(*Rand).Intn()
      /usr/local/go/src/math/rand/rand.go:101 +0x9f
  github.com/xtaci/kcp-go.(*UDPSession).outputTask()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:455 +0x2453

Goroutine 170 (running) created at:
  github.com/xtaci/kcp-go.newUDPSession()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:104 +0x819
  github.com/xtaci/kcp-go.DialWithOptions()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:827 +0x19b
  github.com/getlantern/proxybench.doLocalProxy()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/getlantern/proxybench/proxybench.go:194 +0x11a

Goroutine 144 (running) created at:
  github.com/xtaci/kcp-go.newUDPSession()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:104 +0x819
  github.com/xtaci/kcp-go.DialWithOptions()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:827 +0x19b
  github.com/getlantern/proxybench.doLocalProxy()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/getlantern/proxybench/proxybench.go:194 +0x11a
==================
==================
WARNING: DATA RACE
Read by goroutine 170:
  math/rand.(*rngSource).Int63()
      /usr/local/go/src/math/rand/rng.go:238 +0xb0
  math/rand.(*Rand).Int63()
      /usr/local/go/src/math/rand/rand.go:46 +0x54
  math/rand.(*Rand).Int31()
      /usr/local/go/src/math/rand/rand.go:52 +0x2e
  math/rand.(*Rand).Int31n()
      /usr/local/go/src/math/rand/rand.go:87 +0xd3
  math/rand.(*Rand).Intn()
      /usr/local/go/src/math/rand/rand.go:101 +0x9f
  github.com/xtaci/kcp-go.(*UDPSession).outputTask()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:455 +0x2453

Previous write by goroutine 144:
  math/rand.(*rngSource).Int63()
      /usr/local/go/src/math/rand/rng.go:238 +0xcc
  math/rand.(*Rand).Int63()
      /usr/local/go/src/math/rand/rand.go:46 +0x54
  math/rand.(*Rand).Int31()
      /usr/local/go/src/math/rand/rand.go:52 +0x2e
  math/rand.(*Rand).Int31n()
      /usr/local/go/src/math/rand/rand.go:87 +0xd3
  math/rand.(*Rand).Intn()
      /usr/local/go/src/math/rand/rand.go:101 +0x9f
  github.com/xtaci/kcp-go.(*UDPSession).outputTask()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:455 +0x2453

Goroutine 170 (running) created at:
  github.com/xtaci/kcp-go.newUDPSession()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:104 +0x819
  github.com/xtaci/kcp-go.DialWithOptions()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:827 +0x19b
  github.com/getlantern/proxybench.doLocalProxy()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/getlantern/proxybench/proxybench.go:194 +0x11a

Goroutine 144 (running) created at:
  github.com/xtaci/kcp-go.newUDPSession()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:104 +0x819
  github.com/xtaci/kcp-go.DialWithOptions()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/xtaci/kcp-go/sess.go:827 +0x19b
  github.com/getlantern/proxybench.doLocalProxy()
      /Users/ox.to.a.cart/lantern-pro/src/github.com/getlantern/proxybench/proxybench.go:194 +0x11a
==================

Per-Session SNMP Stats?

Is this something that would add value for you? If so, I'm undertaking it for my own project. Can do a pull request if you're interested.

Thx.

JS

Need assistance with Read... (memory leak)

For now I'm using it like this:

var (
	buf = make([]byte, 512)
)
pLength, err := c.KCPConn.Read(buf)

if pLength != 0 {
	var b []byte = make([]byte, pLength)
	copy(b, buf[:pLength])
	c.PacketChan <- b
}

I want to avoid the copy call. And at 30 packets/sec, memory grows quickly with only one call to
c.KCPConn.Read(buf)
Or is this normal behavior and the intended way to receive packets?

I also tried a []byte pool; same memory issue.

Setting the UDP source port (adding laddr parameter to DialWithOptions in sess.go?)

Hi,

In order to do UDP hole punching when using kcptun's client, I'd like to be able to set the UDP source port. This does not seem possible with the current implementation in sess.go. I first demonstrate that it is easy to hack, but would like to ask what the appropriate API should be, if this were to be considered for inclusion in kcp-go.

Setting the UDP source port would require the current call in DialWithOptions() in sess.go:

udpconn, err := net.DialUDP("udp", nil, udpaddr)

to provide a laddr parameter to net.DialUDP instead of nil. This hack makes kcptun's client_linux_amd64 use source port 20000:

> git diff
diff --git a/sess.go b/sess.go
index fac3122..418c374 100644
--- a/sess.go
+++ b/sess.go
@@ -915,7 +915,12 @@ func DialWithOptions(raddr string, block BlockCrypt, dataShards, parityShards in
                return nil, errors.Wrap(err, "net.ResolveUDPAddr")
        }
 
-       udpconn, err := net.DialUDP("udp", nil, udpaddr)
+       laddr, err := net.ResolveUDPAddr("udp", ":20000")
+       if err != nil {
+               return nil, errors.Wrap(err, "net.ResolveUDPAddr laddr")
+       }
+
+       udpconn, err := net.DialUDP("udp", laddr, udpaddr)
        if err != nil {
                return nil, errors.Wrap(err, "net.DialUDP")
        }

However, the current signature of func DialWithOptions doesn't really make it possible/easy to add a laddr parameter.

// DialWithOptions connects to the remote address "raddr" on the network "udp" with packet encryption
func DialWithOptions(raddr string, block BlockCrypt, dataShards, parityShards int) (*UDPSession, error) {
        ...
}

I don't know of any alternative to or workaround for setting the UDP source port in order to do UDP hole punching and it seems like such a cool use of the KCP family of libraries so I really hope this possibility can be included. To that end I would like to ask @xtaci:

  1. Would you be open to adding the possibility of controlling the source port by providing/setting laddr?
  2. What API changes or addtions would you prefer for this?
    1. Adding a laddr parameter to func DialWithOptions?
      • Pro: This is the cleanest long-term solution
      • Con: That would break all existing callers to kcp-go's DialWithOptions
    2. My favorite: Create a type DialParams struct containing the raddr, block, dataShards, parityShards plus a new laddr element, and then create a new func DialWithParams that takes such a DialParams struct as a single parameter. Mark DialWithOptions as deprecated and implement it as a wrapper around DialWithParams.
      • Pro: This lends itself nicely to expansion in the future and it maintains backwards compatibility with DialWithOptions.
      • Con: Now, confusingly, there would be both a func DialWithParams and a deprecated func DialWithOptions.
    3. Create a new func DialWithOptionsAndLaddr that takes all the parameters of DialWithOptions + laddr? DialWithOptions would then be a wrapper around DialWithOptionsAndLaddr using a nil laddr. Choosing a good name for this func is tricky (it could also be called DialWithMoreOptions) - I guess because it ideally should be called DialWithOptions :-)
      • Pro: This follows the pattern of DialWithOptions and it maintains backwards compatibility with DialWithOptions.
      • Con: Will create a huge mess when someNewParam needs to be added in the future
    4. Some other approach you'd prefer?

License for image files

Hi there,

Thanks for working on this project.
I'm going to package this to Debian. However, according to rule of Debian, every file should be cleared in license.
So I'm writing to you to double confirm the following file:
donate.png frame.png kcp-go.png shannon.jpg

Are you created the above 4 files by yourself? If so, we can safely treat it same as the code you wrote, so it's MIT/Expat license.
But if you borrow some files from other project, please let me know. So I can trace the license of each image file.
Thank you!

Cheers,
Roger

how to handle timeout

Hi, can you assist me with a small issue?
For example, I create connection and setup KeepAlive to 5 seconds:
KCPConn.SetKeepAlive(5)

After this I shut down the client and wait for some time. The following code gives these results:

	log.Print("send1\n")
	l, err := c.KCPConn.Write(p.Serialize())
	log.Print("send2\n")

2016/12/30 14:21:46 send1
2016/12/30 14:21:46 send2
...
2016/12/30 14:21:57 send1

The connection has lived for more than 5 seconds, and I don't know how to detect or handle this error.

Apologize for my bad English.
Regards,
Max.

FEC issue

Regarding FEC decode: it seems to develop a problem after running for a while.
Suppose seqid keeps increasing from the start, data_shards is 2, and parity_shards is 1:

  1. Suppose 0 and 1 are received first and decoded successfully; 0 and 1 are removed from the queue.
  2. Then 2 is received, and 2 stays in the queue.
  3. ... If steps 1 and 2 keep repeating, the queue eventually fills up.
  4. seqid keeps growing until it exceeds the maximum value and restarts from 0. From that point on, because the queue is full
    and the new seqids are always smaller than the values already in the queue, the entry with the smallest seqid keeps being evicted; effectively, FEC stops working from then on.

test failures: too many open files (in fedora build roots)

I'm trying to package this project for fedora, but I'm hitting a "too many open files" error during the mandatory execution of the test suite in the build root:

+ go test -compiler gc -ldflags '' github.com/xtaci/kcp-go
beginning tests, encryption:salsa20, fec:10/3
default mode result (4800ms):
avgrtt=1711 maxrtt=2779
normal mode result (2123ms):
avgrtt=131 maxrtt=281
fast mode result (2104ms):
avgrtt=133 maxrtt=264
panic: net.DialUDP: dial udp 127.0.0.1:9999: socket: too many open files
goroutine 1227 [running]:
github.com/xtaci/kcp-go.dialEcho(0x0, 0x0, 0x0)
	/builddir/build/BUILDROOT/golang-github-xtaci-kcp-go-3.15-1.fc27.x86_64/usr/share/gocode/src/github.com/xtaci/kcp-go/sess_test.go:47 +0x2d2
github.com/xtaci/kcp-go.parallel_client(0xc42019e650, 0xc420202f98, 0x477172)
	/builddir/build/BUILDROOT/golang-github-xtaci-kcp-go-3.15-1.fc27.x86_64/usr/share/gocode/src/github.com/xtaci/kcp-go/sess_test.go:348 +0x26
created by github.com/xtaci/kcp-go.TestParallel1024CLIENT_64BMSG_64CNT
	/builddir/build/BUILDROOT/golang-github-xtaci-kcp-go-3.15-1.fc27.x86_64/usr/share/gocode/src/github.com/xtaci/kcp-go/sess_test.go:342 +0x7e
FAIL	github.com/xtaci/kcp-go	12.556s

About dependency library reedsolomon

I'm the Debian package maintainer of a few KCP-related projects.
It's come to my attention that golang-github-xtaci-kcp v3.19 changed the dependency library reedsolomon from the original upstream to a fork, which claims a big performance boost:

So as the Debian package maintainer, before I can update the Debian package of golang-github-xtaci-kcp to v3.19, I would have to package the new fork of reedsolomon, which I want to avoid.
So I created tickets with the fork and the upstream, to explore the possibility of merging the two versions.

Now it seems the author of the fork isn't interested in submitting the improvement patch upstream.
However, the original author replied in detail and proposed his own improvement:

So I want to ask whether it's possible for you (golang-github-xtaci-kcp) to switch back to the original upstream reedsolomon, of course with the patch from the pull request above.

Thanks for your understanding!

Cheers

What is the calculation rule for application-layer segmentation over UDP?

@xtaci Hello, I read the code of the Write function and have a question about how segmentation is handled:

func (s *UDPSession) Write(b []byte) (n int, err error) {
    s.mu.Lock()
    defer s.mu.Unlock()
    if s.is_closed {
        return 0, ERR_BROKEN_PIPE
    }

    max := int(s.kcp.mss * 255)
    if s.kcp.snd_wnd < 255 {
        max = int(s.kcp.mss * s.kcp.snd_wnd)
    }
    for {
        if len(b) <= max { // in most cases
            s.kcp.Send(b)
            break
        } else {
            s.kcp.Send(b[:max])
            b = b[max:]
        }
    }
    s.need_update = true
    return
}

The UDP header struct is defined as follows:

/* UDP header definition, 8 bytes total */

typedef struct _UDP_HEADER 
{
 unsigned short m_usSourPort;       // source port, 16 bits
 unsigned short m_usDestPort;       // destination port, 16 bits
 unsigned short m_usLength;        // packet length, 16 bits
 unsigned short m_usCheckSum;      // checksum, 16 bits
}__attribute__((packed))UDP_HEADER, *PUDP_HEADER;

The packet length field is 2 bytes, so the maximum UDP payload is 65535.

kcp.mss is set in

func NewKCP(conv uint32, output Output) *KCP {
    kcp := new(KCP)
    kcp.conv = conv
    kcp.snd_wnd = IKCP_WND_SND
    kcp.rcv_wnd = IKCP_WND_RCV
    kcp.rmt_wnd = IKCP_WND_RCV
    kcp.mtu = IKCP_MTU_DEF
    kcp.mss = kcp.mtu - IKCP_OVERHEAD
    kcp.buffer = make([]byte, (kcp.mtu+IKCP_OVERHEAD)*3)
    kcp.rx_rto = IKCP_RTO_DEF
    kcp.rx_minrto = IKCP_RTO_MIN
    kcp.interval = IKCP_INTERVAL
    kcp.ts_flush = IKCP_INTERVAL
    kcp.ssthresh = IKCP_THRESH_INIT
    kcp.dead_link = IKCP_DEADLINK
    kcp.output = output
    return kcp
}

where mss = IKCP_MTU_DEF - IKCP_OVERHEAD = 1400 - 24 = 1376.

I found the following information online:

What happens when the UDP data we send exceeds 1472 bytes?
That means the IP datagram is larger than 1500 bytes, i.e. larger than the MTU, so the sender's IP layer has to fragment it,
splitting the datagram into several fragments, each smaller than the MTU, while the receiver's IP layer has to reassemble them.
This adds a lot of extra work, and worse, because of UDP's nature, if any single fragment is lost in transit the receiver
cannot reassemble the datagram, and the whole UDP datagram is discarded.
Therefore, on an ordinary LAN, I'd recommend keeping UDP data under 1472 bytes.

Internet programming is different, because routers on the Internet may set the MTU to different values.
If we send data assuming an MTU of 1500, but some network along the path has an MTU smaller than 1500 bytes, the system will use a series of
mechanisms to adjust the MTU so that the datagram can still reach its destination, which incurs a lot of unnecessary work.

Given that the standard MTU on the Internet is 576 bytes, when doing UDP programming over the Internet
it's best to keep the UDP data length within 548 bytes (576 - 8 - 20).

My questions:

  1. What is the relationship between socket buffers, TCP/UDP (it's said online that the kernel doesn't buffer UDP), and the number of windows?

Because UDP is an unreliable protocol, it doesn't need to keep a copy of the application's data. As the application's data is passed down the protocol stack, it is copied in some form into a kernel buffer, and once the data link layer has sent it, that copy is deleted, so UDP doesn't need a send buffer. When write on a UDP socket returns, it means the application's data or data fragments have entered the link layer's output queue; if the output queue doesn't have enough space to hold the data, the error ENOBUFS is returned.

  2. If a UDP datagram longer than the MTU gets fragmented at the IP layer, and the network loss rate is fairly high, why doesn't the application layer cap each UDP segment at s.kcp.mss instead of multiplying by the window count? (I think this relates to the previous question.)
    max := int(s.kcp.mss * 255)
    if s.kcp.snd_wnd < 255 {
        max = int(s.kcp.mss * s.kcp.snd_wnd)
    }

Looking forward to your reply; thank you very much!

Question about kcp under high load (sort of high load)

Hi,
I'm trying to use kcp to speed up data transmission over the network. Below is the example code I'm using to simulate data transfer.

Common part

func configureKCPConnection(conn *kcp.UDPSession, timeout time.Duration) {
	conn.SetStreamMode(true)
	conn.SetWindowSize(512, 512)
	conn.SetNoDelay(1, 40, 2, 1)
	conn.SetACKNoDelay(false)

	conn.SetReadDeadline(time.Now().Add(timeout))
	conn.SetWriteDeadline(time.Now().Add(timeout))
}

func recvData(r io.Reader) ([]byte, error) {
	buf := make([]byte, 4)

	_, err := r.Read(buf)
	if err != nil {
		return nil, err
	}

	if len(buf) == 0 {
		return buf, nil
	}

	// read message size as 4 bytes from the beginning of the message
	size := int(binary.LittleEndian.Uint32(buf))

	mbuf := bytes.NewBuffer([]byte{})

	buf = make([]byte, 4096)

	total := 0
	for total < size {
		n, err := r.Read(buf)
		if err != nil {
			return nil, err
		}
		mbuf.Write(buf[:n])
		total += n
		if err != nil {
			break
		}
	}

	return mbuf.Bytes(), nil
}

func sendData(w io.Writer, msg []byte) (int, error) {
	b := []byte{0, 0, 0, 0}
	binary.LittleEndian.PutUint32(b, uint32(len(msg)))

	mbuf := bytes.NewBuffer(b)
	mbuf.Write(msg)

	return w.Write(mbuf.Bytes())
}

Server:

const symbols = " 0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

func generateRandomText(size int) string {
	b := make([]byte, size)
	for i := range b {
		b[i] = symbols[rand.Int63()%int64(len(symbols))]
	}
	return string(b)
}

func main() {
	l, err := kcp.ListenWithOptions("0.0.0.0:8100", nil, 10, 3)
	if err != nil {
		panic(err)
	}
	defer l.Close()

	timeout := time.Second * 30

	fmt.Println("running...")
	for {
		conn, err := l.AcceptKCP()
		if err != nil {
			fmt.Println(err)
			continue
		}

		configureKCPConnection(conn, timeout)
		msg, err := recvData(conn)
		if err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("received:", string(msg))
		}

		n, e := sendData(conn, []byte(generateRandomText(131072)))

		if e != nil {
			fmt.Println(e)
		} else {
			fmt.Println("wrote", n, "bytes")
		}

		conn.Close()
	}
}

Client:

func main() {
	limit := 10

	for i := 0; i < limit; i++ {
		conn, err := kcp.DialWithOptions("remote_ip_addr:8100", nil, 10, 3)

		if err != nil {
			panic(err)
		}

		timeout := time.Second * 30

		configureKCPConnection(conn, timeout)

		sendData(conn, []byte("hello"))
		msg, err := recvData(conn)

		if err != nil {
			fmt.Println(err)
		} else {
			fmt.Println("message size recevied is", len(msg))
		}

		conn.Close()

		time.Sleep(time.Millisecond * 100)
	}
}

When limit in the client is 1, everything is perfect: a single transmission works very well.
When limit is more than 1 (10, 50, 100, 500, 1000 and so on), the client periodically times out, as does the server. Timeouts happen on conn.Read (r.Read in the recvData code).

Questions:

  • Am I using kcp correctly, assuming it provides reliability over UDP for a single client?
  • What should I do to avoid the timeouts?

meaning of LostSegs

I want to ask about the meaning of LostSegs.
I find this number differs between the server and the client.
This is the client's:

KCP SNMP:&{BytesSent:100581803832 BytesReceived:19133600098 MaxConn:4 ActiveOpens:4593 PassiveOpens:0 CurrEstab:2 InErrs:0 InCsumErrors:27332 KCPInErrors:0 InPkts:34868054 OutPkts:95815874 InSegs:43088735 OutSegs:98282529 InBytes:20178092108 OutBytes:104890275086 RetransSegs:25963 FastRetransSegs:8631 EarlyRetransSegs:1151 LostSegs:16181 RepeatSegs:11135 FECRecovered:0 FECErrs:0 FECParityShards:0 FECShortShards:0}

This is the server's:

KCP SNMP:&{BytesSent:19168469713 BytesReceived:100765677835 MaxConn:5 ActiveOpens:0 PassiveOpens:4596 CurrEstab:2 InErrs:0 InCsumErrors:0 KCPInErrors:0 InPkts:95973816 OutPkts:35324336 InSegs:98441068 OutSegs:43562189 InBytes:103159626280 OutBytes:20957736648 RetransSegs:402863 FastRetransSegs:26026 EarlyRetransSegs:6767 LostSegs:370070 RepeatSegs:24247 FECRecovered:0 FECErrs:0 FECParityShards:0 FECShortShards:0}

And why is the server's number so big?

Connections never time out

My application-level timeout kicks in after 10 minutes, which is quite a lot, yet the protocol itself doesn't seem to have any timeout controls.

A second connection from the same IP and port causes an error

Suppose a client connects to the server from address 123.123.123.123:6353, runs some tests, then closes the connection.
It then connects a second time from the same address, and now the server errors out. Looking into it, the error seems to occur in the code below.
Does every UDP packet contain the convid?

                if !ok { // new session
                    var conv uint32
                    convValid := false
                    if l.fec != nil {
                        isfec := binary.LittleEndian.Uint16(data[4:])
                        if isfec == typeData {
                            conv = binary.LittleEndian.Uint32(data[fecHeaderSizePlus2:])
                            convValid = true
                        }
                    } else {
                        conv = binary.LittleEndian.Uint32(data)
                        convValid = true
                    }

                    if convValid {
                        if s := newUDPSession(conv, l.dataShards, l.parityShards, l, l.conn, from, l.block); s != nil {
                            s.kcpInput(data)
                            l.sessions[addr] = s
                            l.chAccepts <- s
                        } else {
                            log.Println("cannot create session")
                        }
                    }
                } else {
                    s.kcpInput(data)
                }

Are ping packets completely random?

The following snippet from outputTask() sends ping packets; both client and server send them, and the bytes sent are completely random:

        case <-ticker.C: // NAT keep-alive
            if len(s.chUDPOutput) == 0 {
                s.mu.Lock()
                interval := s.keepAliveInterval
                s.mu.Unlock()
                if interval > 0 && time.Now().After(lastPing.Add(interval)) {
                    sz := s.rng.Intn(IKCP_MTU_DEF - s.headerSize - IKCP_OVERHEAD)
                    sz += s.headerSize + IKCP_OVERHEAD
                    ping := make([]byte, sz)
                    io.ReadFull(crand.Reader, ping)
                    n, err := s.writeTo(ping, s.remote)
                    if err != nil {
                        log.Println(err, n)
                    }
                    lastPing = time.Now()
                }
            }

So my question is about the ping packets consisting of completely random bytes:
If encryption is enabled, the receiver will fail the checksum on these completely random ping packets and ignore them, but the checksum errors caused by ping packets are recorded in DefaultSnmp.InCsumErrors. Surely that isn't intended?

    for {
        select {
        case p := <-chPacket:
            raw := p.data
            data := p.data
            from := p.from
            dataValid := false
            if l.block != nil {
                l.block.Decrypt(data, data)
                data = data[nonceSize:]
                checksum := crc32.ChecksumIEEE(data[crcSize:])
                if checksum == binary.LittleEndian.Uint32(data) {
                    data = data[crcSize:]
                    dataValid = true
                } else {
                    atomic.AddUint64(&DefaultSnmp.InCsumErrors, 1)
                }
            } else if l.block == nil {
                dataValid = true
            }
            s.kcpInput(data)

Even with encryption and CRC enabled, there's a tiny probability that a randomly generated packet happens to pass validation and gets treated as a normal packet, which would also be anomalous. Is my understanding correct?

In xor.go fastXORWords(), why perform xor in 8-word batch?

Hi, I just can't understand why the xor is done 8 words at a time.

for i := ex; i < n; i += 8 {
	_dw := dw[i : i+8]
	_aw := aw[i : i+8]
	_bw := bw[i : i+8]
	_dw[0] = _aw[0] ^ _bw[0]
	_dw[1] = _aw[1] ^ _bw[1]
	.......
}

Why not this:

for i := ex; i < n; i += 1 {
	dw[i] = aw[i] ^ bw[i]
}
Just a little explanation would help: compiler optimization? CPU prefetch? Loop unrolling?

Getting 99% cpu usage, not sure what Im doing wrong

As the title suggests, I might be doing something wrong but currently kcp is not usable for how I am trying to use it.
I'm running a web server with kcp as the transport. Connecting to it works fine, except that once a client connects, the server continues to consume 99% CPU even after all clients disconnect. It doesn't seem to matter how much traffic is sent through the server.
Based on the stated fact that kcp is "Compatible with net.Conn and net.Listener, easy to use",
my code is:

lis, err := kcp.ListenWithOptions(addr, nil, 10, 3)
// lis, err := net.Listen("tcp", addr)
if err != nil {
	log.Fatal("Backend failed to listen on: ", addr)
}
srv := &http.Server{Addr: addr, Handler: proxy}
log.Fatal(srv.Serve(lis))

Switching back to the TCP transport in the above code resolves the issue: CPU usage is low when handling requests and drops to almost nothing when clients disconnect.
Are there some default settings that might be causing this? If so, is there a way to change them easily without needing to access the individual connection objects?
Cheers for any help that can be offered.

FEC library switch

Hello, was the FEC library switched back because there was some problem with the library it had previously been switched to?
