
Comments (7)

ianlancetaylor commented on May 30, 2024

Go can only expose the information that the kernel exposes. Does the Linux kernel have a way to distinguish a half-closed connection from a closed one?

from go.

shinny-chengzhi commented on May 30, 2024

Yes, epoll_wait() has a flag, EPOLLRDHUP, that signals the connection has been half-closed. If both sides have called shutdown(SHUT_WR), epoll_wait() returns EPOLLHUP. If an RST is received at any time, epoll_wait() returns EPOLLERR.


rittneje commented on May 30, 2024

Please note that when your peer closes their socket, their kernel may send FIN or RST, depending on whether there is any data in their receive buffer at that point.

If the peer sends FIN, your read will receive io.EOF, and thus this case is indistinguishable from if they call shutdown(SHUT_WR). If the peer sends RST, your read will instead receive syscall.ECONNRESET (i.e., "connection reset by peer").


antong commented on May 30, 2024

In general, it is not possible to know whether the remote peer has closed or half-closed the connection. This is not because the kernel fails to expose such information; the limitation is more fundamental. TCP has no signal for "full close". In TCP, a peer only signals that it has finished sending.

The following is a network capture of a Linux 6.8.3 host (10.198.224.251) closing a connection. The closing packet sequence is identical whether the program calls TCPConn.Close (close), calls TCPConn.CloseWrite (shutdown(SHUT_WR)), or is killed outright. In all three cases the host that closes the connection sends a single FIN packet.

21:37:56.449578 IP 10.198.224.1.43562 > 10.198.224.251.7878: Flags [S], seq 3444407537, win 32120, options [mss 1460,sackOK,TS val 3925703771 ecr 0,nop,wscale 7], length 0
21:37:56.449632 IP 10.198.224.251.7878 > 10.198.224.1.43562: Flags [S.], seq 1100336174, ack 3444407538, win 31856, options [mss 1460,sackOK,TS val 1390367678 ecr 3925703771,nop,wscale 7], length 0
21:37:56.449663 IP 10.198.224.1.43562 > 10.198.224.251.7878: Flags [.], ack 1, win 251, options [nop,nop,TS val 3925703771 ecr 1390367678], length 0
21:38:10.462390 IP 10.198.224.251.7878 > 10.198.224.1.43562: Flags [F.], seq 1, ack 1, win 249, options [nop,nop,TS val 1390381691 ecr 3925703771], length 0
21:38:10.462690 IP 10.198.224.1.43562 > 10.198.224.251.7878: Flags [.], ack 2, win 251, options [nop,nop,TS val 3925717785 ecr 1390381691], length 0

So, even if the remote peer (10.198.224.251) has "full-closed", or even if its application has exited completely, the host 10.198.224.1 only knows that the remote peer has finished writing. The application running on 10.198.224.1 may still write to the socket without error (though if, say, the remote application has exited, that write triggers an RST, and once the RST arrives, the next write produces an error).

Also note that calling TCPConn.CloseRead does not cause any packet to be sent.

So, I would say that this is not an issue with Go, but a feature of TCP.

Suggested approach to fix your issue: call your proxy P, and the peers A and B: A <-> P <-> B. When P reads EOF from A (connA), call connB.CloseWrite to half-close the connection with B. Continue copying data from connB to connA. Upon reading EOF from connB, call connA.CloseWrite. For each connection, keep track of whether the other end has closed (EOF received) and whether you have closed (CloseWrite called). When both have closed, call Close.


AGWA commented on May 30, 2024

Here's an implementation of @antong's suggestion:

func proxy(aConn, bConn *net.TCPConn) {
        var wg sync.WaitGroup
        wg.Add(2)

        go func() {
                io.Copy(aConn, bConn) // Proxy data from B to A until EOF received
                aConn.CloseWrite()    // Proxy FIN from B to A
                wg.Done()
        }()
        go func() {
                io.Copy(bConn, aConn) // Proxy data from A to B until EOF received
                bConn.CloseWrite()    // Proxy FIN from A to B
                wg.Done()
        }()
        wg.Wait()
        aConn.Close()
        bConn.Close()
}

Unfortunately, it's prone to resource leaks. Let's say the connection to B is fully closed. The first goroutine will terminate, but the second goroutine will only terminate if it receives data from A (because it will try writing it to B and get an EPIPE error). If it never receives data from A, it will never try writing to B so it will never terminate.

To fix this, you can use poll to monitor bConn for POLLHUP, which fires after an RST is received (which can happen in response to keepalives). After getting POLLHUP, you interrupt the io.Copy by calling CloseRead:

func proxy(aConn, bConn *net.TCPConn) {
        var wg sync.WaitGroup
        wg.Add(4)

        go func() {
                io.Copy(aConn, bConn) // Proxy data from B to A until EOF received
                aConn.CloseWrite()    // Proxy FIN from B to A
                wg.Done()
        }()
        go func() {
                io.Copy(bConn, aConn) // Proxy data from A to B until EOF received
                bConn.CloseWrite()    // Proxy FIN from A to B
                wg.Done()
        }()
        go func() {
                waitForHup(bConn)
                aConn.CloseRead() // Cause io.Copy(bConn, aConn) to return
                wg.Done()
        }()
        go func() {
                waitForHup(aConn)
                bConn.CloseRead() // Cause io.Copy(aConn, bConn) to return
                wg.Done()
        }()
        wg.Wait()
        aConn.Close()
        bConn.Close()
}
func waitForHup(conn net.Conn) error {
        syscallConn, err := conn.(syscall.Conn).SyscallConn()
        if err != nil {
                return err
        }
        var pollErr error
        if err := syscallConn.Control(func(fd uintptr) {
                for {
                        pfds := []unix.PollFd{{
                                Fd: int32(fd),
                        }}
                        if _, err := unix.Poll(pfds, -1); err == nil {
                                return
                        } else if !errors.Is(err, unix.EINTR) {
                                pollErr = err
                                return
                        }
                }
        }); err != nil {
                return err
        }
        return pollErr
}

I've successfully used this technique to write proxies in C that handle full and half closes transparently. Unfortunately, in Go, calling unix.Poll ties up an operating system thread. It would be nice if we could use netpoll for this. Even better would be a function that works like io.Copy (e.g. net.Copy) that returns when the destination connection is no longer writable (as indicated by POLLHUP).


antong commented on May 30, 2024

Well, for the example case to be a resource leak, A must neither read nor write, forever. To handle such misbehaving peers, you need timeouts anyway.


shinny-chengzhi commented on May 30, 2024

The problem is that once a FIN has been received and conn.Read() has returned io.EOF, Go is unable to detect an RST from that point on.

As a proxy the use case is:

  • The client sends a request to the proxy.
  • The proxy relays the request, which takes a long time to process, to the backend server.
  • The client sends a FIN to the proxy to signal that there are no more requests.
  • The proxy relays the FIN to the backend server.
  • The client sends an RST to the proxy.
  • The proxy should relay the RST to the backend server so it can stop processing the previous request immediately.

Currently there is no way to relay the RST to the backend server in this case, and the backend server wastes resources processing a request that is no longer needed.

Also, I'm using TCP keepalive to detect dead connections, but I can't get the detection result once a FIN has been received.

