Comments (7)
Go can only expose the information that the kernel exposes. Does the Linux kernel have a way to distinguish a half-closed connection from a closed one?
from go.
Yes. epoll_wait() has a flag, EPOLLRDHUP, to signal that the connection has been half-closed. If both sides have called shutdown(SHUT_WR), then epoll_wait() will return with EPOLLHUP. If an RST is received at any time, then epoll_wait() will return with EPOLLERR.
Please note that when your peer close()s their socket, their kernel may send FIN or RST, depending on whether there is any data in their receive buffer at that point. If the peer sends FIN, your read will receive io.EOF, and thus this case is indistinguishable from them calling shutdown(SHUT_WR). If the peer sends RST, your read will instead receive syscall.ECONNRESET (i.e., "connection reset by peer").
In general, it is not possible to know whether the remote peer has closed or half-closed the connection. This is not because the kernel doesn't expose such information; it is more fundamental: TCP has no signal for "full close". In TCP, a peer only signals that it has finished sending.
The following is a network capture of a Linux 6.8.3 host (10.198.224.251) closing a connection. The closing packet sequence is identical whether the program calls TCPConn.Close (close), calls TCPConn.CloseWrite (shutdown(SHUT_WR)), or even when the process is killed. In all three cases, the closing host sends a single FIN packet.
21:37:56.449578 IP 10.198.224.1.43562 > 10.198.224.251.7878: Flags [S], seq 3444407537, win 32120, options [mss 1460,sackOK,TS val 3925703771 ecr 0,nop,wscale 7], length 0
21:37:56.449632 IP 10.198.224.251.7878 > 10.198.224.1.43562: Flags [S.], seq 1100336174, ack 3444407538, win 31856, options [mss 1460,sackOK,TS val 1390367678 ecr 3925703771,nop,wscale 7], length 0
21:37:56.449663 IP 10.198.224.1.43562 > 10.198.224.251.7878: Flags [.], ack 1, win 251, options [nop,nop,TS val 3925703771 ecr 1390367678], length 0
21:38:10.462390 IP 10.198.224.251.7878 > 10.198.224.1.43562: Flags [F.], seq 1, ack 1, win 249, options [nop,nop,TS val 1390381691 ecr 3925703771], length 0
21:38:10.462690 IP 10.198.224.1.43562 > 10.198.224.251.7878: Flags [.], ack 2, win 251, options [nop,nop,TS val 3925717785 ecr 1390381691], length 0
So, even if the remote peer (10.198.224.251) has "full-closed", or even if its application has exited completely, the host 10.198.224.1 only knows that the remote peer has finished writing. The application running on 10.198.224.1 may still write to the socket without error (though if the remote application has exited, the write will trigger an RST, and the next write after receiving that RST will produce an error).
Also, note that calling TCPConn.CloseRead does not cause any packet to be sent.
So, I would say that this is not an issue with Go, but a feature of TCP.
Suggested approach to fix your issue: call your proxy P and the peers A and B: A <-> P <-> B. When P reads EOF from A (connA), call connB.CloseWrite to half-close the connection with B. Continue copying data from connB to connA. Upon reading EOF from connB, call connA.CloseWrite. For each connection, keep track of whether the other end has closed (EOF received) and whether you have closed (CloseWrite called). When both have happened, call Close.
Here's an implementation of @antong's suggestion:
func proxy(aConn, bConn *net.TCPConn) {
	var wg sync.WaitGroup
	wg.Add(2)
	go func() {
		io.Copy(aConn, bConn) // Proxy data from B to A until EOF received
		aConn.CloseWrite()    // Proxy FIN from B to A
		wg.Done()
	}()
	go func() {
		io.Copy(bConn, aConn) // Proxy data from A to B until EOF received
		bConn.CloseWrite()    // Proxy FIN from A to B
		wg.Done()
	}()
	wg.Wait()
	aConn.Close()
	bConn.Close()
}
Unfortunately, it's prone to resource leaks. Let's say the connection to B is fully closed. The first goroutine will terminate, but the second goroutine will only terminate if it receives data from A (because it will try writing that data to B and get an EPIPE error). If it never receives data from A, it will never try writing to B, so it will never terminate.
To fix this, you can use poll to monitor bConn for POLLHUP, which fires after an RST is received (which can happen in response to keepalives). After getting POLLHUP, you interrupt the io.Copy by calling CloseRead:
func proxy(aConn, bConn *net.TCPConn) {
	var wg sync.WaitGroup
	wg.Add(4)
	go func() {
		io.Copy(aConn, bConn) // Proxy data from B to A until EOF received
		aConn.CloseWrite()    // Proxy FIN from B to A
		wg.Done()
	}()
	go func() {
		io.Copy(bConn, aConn) // Proxy data from A to B until EOF received
		bConn.CloseWrite()    // Proxy FIN from A to B
		wg.Done()
	}()
	go func() {
		waitForHup(bConn)
		aConn.CloseRead() // Cause io.Copy(bConn, aConn) to return
		wg.Done()
	}()
	go func() {
		waitForHup(aConn)
		bConn.CloseRead() // Cause io.Copy(aConn, bConn) to return
		wg.Done()
	}()
	wg.Wait()
	aConn.Close()
	bConn.Close()
}
func waitForHup(conn net.Conn) error {
	syscallConn, err := conn.(syscall.Conn).SyscallConn()
	if err != nil {
		return err
	}
	var pollErr error
	if err := syscallConn.Control(func(fd uintptr) {
		for {
			// Events is left zero: POLLHUP and POLLERR are
			// always reported, regardless of the events requested.
			pfds := []unix.PollFd{{
				Fd: int32(fd),
			}}
			if _, err := unix.Poll(pfds, -1); err == nil {
				return
			} else if !errors.Is(err, unix.EINTR) {
				pollErr = err
				return
			}
		}
	}); err != nil {
		return err
	}
	return pollErr
}
I've successfully used this technique to write proxies in C that handle full and half closes transparently. Unfortunately, in Go, calling unix.Poll ties up an operating system thread. It would be nice if we could use netpoll for this. Even better would be a function that works like io.Copy (e.g. net.Copy) that returns when the destination connection is no longer writable (as indicated by POLLHUP).
Well, for the example case to be a resource leak, A needs to neither read nor write, forever. To handle such misbehaving peers, you need timeouts anyway.
The problem is that once a FIN has been received and conn.Read() has returned io.EOF, Go is unable to detect an RST from that point on.
As a proxy, the use case is:
- client sends a request to the proxy
- proxy relays the request (which takes a long time to process) to the backend server
- client sends FIN to the proxy to signal that there are no more requests
- proxy relays the FIN to the backend server
- client sends RST to the proxy
- proxy should relay the RST to the backend server so it can stop processing the previous request immediately
Currently there is no way to relay the RST to the backend server in this case, and the backend server wastes resources processing a request which is no longer needed.
Also, I'm using TCP keepalive to detect dead connections, but I can't get the detection result once a FIN has been received.