go-tcp-transport's Introduction

DEPRECATION NOTICE

This package has moved into go-libp2p as a sub-package, github.com/libp2p/go-libp2p/p2p/transport/tcp.

go-tcp-transport

A libp2p transport implementation for TCP, including reuseport socket options.

go-tcp-transport is an implementation of the libp2p transport interface that streams data over TCP/IP sockets. It is included by default in the main go-libp2p "entry point" module.

Table of Contents

  • Install
  • Usage
  • Addresses
  • Security and Multiplexing
  • reuseport
  • Contribute
  • License

Install

go-tcp-transport is included as a dependency of go-libp2p, which is the most common libp2p entry point. If you depend on go-libp2p, there is generally no need to explicitly depend on this module.

go-tcp-transport is a standard Go module which can be installed with:

go get github.com/libp2p/go-tcp-transport

This repo is gomod-compatible, and users of Go 1.11 and later with modules enabled will automatically pull the latest tagged release by referencing this package. Upgrades to future releases can be managed using go get, or by editing your go.mod file as described in the Go modules documentation.
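For example, the newest tagged release can be pulled with the module-aware form of go get:

go get github.com/libp2p/go-tcp-transport@latest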

Usage

TCP is one of the default transports enabled when constructing a standard libp2p Host, along with WebSockets.

Calling libp2p.New to construct a libp2p Host will enable the TCP transport, unless you override the default transports by passing in Options to libp2p.New.

To explicitly enable the TCP transport while constructing a host, use the libp2p.Transport option, passing in the NewTCPTransport constructor function:

import (
    libp2p "github.com/libp2p/go-libp2p"
    tcp "github.com/libp2p/go-tcp-transport"
)

// TCP only:
h, err := libp2p.New(
    libp2p.Transport(tcp.NewTCPTransport),
)

The example above will replace the default transports with a single TCP transport. To add multiple transports:

// TCP and QUIC:
h, err := libp2p.New(
    libp2p.Transport(tcp.NewTCPTransport),
    libp2p.Transport(quic.NewTransport), // see https://github.com/libp2p/go-libp2p-quic-transport
)

To configure TCP transport options, pass them to libp2p.Transport alongside the NewTCPTransport constructor:

h, err := libp2p.New(
    libp2p.Transport(tcp.NewTCPTransport, tcp.DisableReuseport(), tcp.WithConnectionTimeout(20*time.Second)), // requires the "time" import
)

Addresses

The TCP transport supports multiaddrs that contain a tcp component, provided that there is sufficient addressing information for the IP layer of the connection.

Examples:

addr                     description
/ip4/1.2.3.4/tcp/1234    IPv4: 1.2.3.4, TCP port 1234
/ip6/::1/tcp/1234        IPv6 loopback, TCP port 1234
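For illustration, listen addresses like these are typically passed to the host constructor. The sketch below uses the go-libp2p libp2p.ListenAddrStrings option; the addresses and port are illustrative:

import (
    libp2p "github.com/libp2p/go-libp2p"
    tcp "github.com/libp2p/go-tcp-transport"
)

// Listen on TCP over IPv4 and IPv6 (addresses and port are examples only):
h, err := libp2p.New(
    libp2p.Transport(tcp.NewTCPTransport),
    libp2p.ListenAddrStrings(
        "/ip4/0.0.0.0/tcp/9000",
        "/ip6/::/tcp/9000",
    ),
)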

Security and Multiplexing

Because TCP lacks native connection security and stream multiplexing facilities, the TCP transport uses a transport upgrader to provide those features. The transport upgrader negotiates transport security and multiplexing for each connection according to the protocols supported by each party.
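For illustration only, the sketch below shows how a specific security protocol and stream muxer can be handed to the upgrader via go-libp2p options. The noise and yamux modules and identifiers are assumptions based on packages commonly used alongside this transport, not something this README prescribes:

import (
    libp2p "github.com/libp2p/go-libp2p"
    noise "github.com/libp2p/go-libp2p-noise"
    yamux "github.com/libp2p/go-libp2p-yamux"
    tcp "github.com/libp2p/go-tcp-transport"
)

// Sketch: pin the security protocol and stream muxer the upgrader may negotiate.
h, err := libp2p.New(
    libp2p.Transport(tcp.NewTCPTransport),
    libp2p.Security(noise.ID, noise.New),
    libp2p.Muxer("/yamux/1.0.0", yamux.DefaultTransport),
)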

reuseport

The SO_REUSEPORT socket option allows multiple processes or threads to bind to the same TCP port, provided that all of them set the socket option. This has some performance benefits, and it can potentially assist in NAT traversal by only requiring one port to be accessible for many connections.

The reuseport functionality is provided by a separate module, go-reuseport-transport. It is enabled by default, but can be disabled at runtime by setting the LIBP2P_TCP_REUSEPORT environment variable to false or 0.
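As a minimal sketch, the same effect can be achieved in code with the DisableReuseport option shown in the Usage section; the environment variable achieves it without a code change:

// Construct a host whose TCP transport does not set SO_REUSEPORT:
h, err := libp2p.New(
    libp2p.Transport(tcp.NewTCPTransport, tcp.DisableReuseport()),
)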

Contribute

PRs are welcome!

Small note: If editing the Readme, please conform to the standard-readme specification.

License

MIT © Jeromy Johnson


The last gx published version of this module was: 2.0.28: QmTGiDkw4eeKq31wwpQRk5GwWiReaxrcTQLuCCLWgfKo5M

go-tcp-transport's Issues

Failed to get TCP info: getsockopt: not implemented

After updating Prysm to a recent version of libp2p, many users are reporting this log line. It appears to be related to metrics collection, and it is a cause for concern.

Failed to get TCP info: raw-control tcp 10.0.0.2:13000: getsockopt: not implemented

It seems to be coming from this line: https://github.com/libp2p/go-tcp-transport/blob/v0.2.4/metrics.go#L135

How can we disable this metrics collection?
What is this log about?

Tracking in Prysm here: prysmaticlabs/prysm#9733

failed to enable TCP keepalive errors clogging up logs

An M1 go-ipfs user today booted up a node and saw the daemon fill its logs with errors like the one below.

2022-01-10T16:23:23.528-0600	ERROR	tcp-tpt	[email protected]/tcp.go:52	failed to enable TCP keepalive	{"error": "set tcp4 192.168.86.30:4001->216.180.83.35:44005: setsockopt: invalid argument"}

It seems like this check isn't quite working in all situations, at least on the M1.

if errors.Is(err, os.ErrInvalid) {
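A broader check, sketched here as a hypothetical helper (keepaliveErrIsHarmless is not part of the codebase), would also treat a raw EINVAL from setsockopt as "keepalive unsupported" rather than an error worth logging:

import (
    "errors"
    "os"
    "syscall"
)

// Hypothetical helper: report whether a keepalive failure should be ignored
// (or logged at debug level) instead of being logged as an error.
func keepaliveErrIsHarmless(err error) bool {
    return errors.Is(err, os.ErrInvalid) || errors.Is(err, syscall.EINVAL)
}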

Reuseport reconnect delay

For some reason, reuseport is causing reconnects to be delayed by 15 seconds, at least on localhost on my machine. This, of course, causes the reconnect to fail because our timeout is 5 seconds.

Dependabot can't parse your go.mod

Dependabot couldn't parse the go.mod found at /go.mod.

The error Dependabot encountered was:

go: github.com/libp2p/[email protected] requires
	github.com/onsi/[email protected] requires
	gopkg.in/[email protected] requires
	gopkg.in/[email protected]: invalid version: git fetch -f origin refs/heads/*:refs/heads/* refs/tags/*:refs/tags/* in /opt/go/gopath/pkg/mod/cache/vcs/9241c28341fcedca6a799ab7a465dd6924dc5d94044cbfabb75778817250adfc: exit status 128:
	fatal: The remote end hung up unexpectedly

View the update logs.

Build failure: type reuseport.Dialer has no field or method DialContext

github.com/libp2p/go-libp2p-swarm $ go version
go version go1.7.1 darwin/amd64
github.com/libp2p/go-libp2p-swarm $ go build
# github.com/libp2p/go-tcp-transport
../go-tcp-transport/tcp.go:196: d.rd.DialContext undefined (type reuseport.Dialer has no field or method DialContext)

Also, it doesn't look like Travis CI actually tries to build the code.

Cleanup prometheus metrics more predictably

At the moment we have a global variable for TCP metrics

var collector *aggregatingCollector

Which has a map of connections

conns map[uint64] /* id */ *tracingConn

That we only clear out when the metrics are collected

func (c *aggregatingCollector) Collect(metrics chan<- prometheus.Metric) {

We should clear out these connections more predictably (e.g. on connection close or some background goroutine).


There might also be a bug related to the cleanup itself

go-tcp-transport/metrics.go

Lines 110 to 120 in 1b96803

var bytesSent, bytesRcvd uint64
for _, conn := range c.conns {
    info, err := conn.getTCPInfo()
    if err != nil {
        if strings.Contains(err.Error(), "use of closed network connection") {
            c.closedConn(conn)
            continue
        }
        log.Errorf("Failed to get TCP info: %s", err)
        continue
    }

where, if we're unable to get the TCP info for any other reason, we never clean it up. For example, on Windows I see the following error:

2021-07-08T21:39:47.925-0400    ERROR   tcp-tpt [email protected]/metrics.go:118  Failed to get TCP info: raw-control tcp 192.168.1.6:4001: getsockopt: not implemented
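One way to make the cleanup predictable, sketched under the assumption that tracingConn embeds the wrapped connection as Conn and that closedConn handles its own locking, is to deregister the connection from the collector when it is closed rather than waiting for the next Collect:

// Sketch only: drop the connection from the global collector on Close,
// instead of waiting for the next metrics collection to notice it is gone.
func (c *tracingConn) Close() error {
    collector.closedConn(c)
    return c.Conn.Close()
}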

tests failing with go-libp2p-testing v0.4.x

with v0.4.0:

=== RUN   TestTcpTransport/github.com/libp2p/go-libp2p-testing/suites/transport.SubtestStress50Conn10Stream50Msg
panic: Fail in goroutine after TestTcpTransport/github.com/libp2p/go-libp2p-testing/suites/transport.SubtestStress1Conn100Stream100Msg has completed

goroutine 221 [running]:
testing.(*common).Fail(0xc000902a80)
        /usr/local/Cellar/go/1.16.6/libexec/src/testing/testing.go:697 +0x125
testing.(*common).Error(0xc000902a80, 0xc000977f98, 0x1, 0x1)
        /usr/local/Cellar/go/1.16.6/libexec/src/testing/testing.go:797 +0x78
github.com/libp2p/go-libp2p-testing/suites/transport.echoStream(0xc000902a80, 0x1746df8, 0xc00061cd00)
        /Users/marten/src/go/pkg/mod/github.com/libp2p/[email protected]/suites/transport/stream_suite.go:109 +0x26d
created by github.com/libp2p/go-libp2p-testing/suites/transport.goServe.func1.1
        /Users/marten/src/go/pkg/mod/github.com/libp2p/[email protected]/suites/transport/stream_suite.go:145 +0x4b
exit status 2
FAIL    github.com/libp2p/go-tcp-transport      0.374s

With v0.4.1 the test SubtestStress1Conn1Stream1Msg hangs indefinitely. Apart from fixing this test failure, we probably want to add a test timeout.

Simultaneous open

We need a way to deal with and detect simultaneous open. When two computers open a TCP connection with the same 5-tuple at the same time, they both get the same connection and each ends up thinking that it's the initiator. This breaks multistream-select, stream muxers, TLS (eventually), etc. Worse, it causes the current go-multistream protocol negotiation to hang (because we expect the server to send us the /multistream/1.0.0 header first, and both sides think they're the client).

Multistream solution

To fix this in multistream, clients could send the following immediately, instead of waiting for the server's greeting as they currently do:

/multistream/1.0.0
iamclient
/myprotocol/1.0.0

If the server sends back:

/multistream/1.0.0
na
/myprotocol/1.0.0

We know that we're actually the client (keep the connection).

If, instead, they send back anything other than na, we know that we both think we're the client and we kill the connection.

Unfortunately, this means we'd need to build this into multistream-select, even though we really only need it for TCP.

TCP solution

We could fix this by having each side that thinks it's the client send an out-of-band message (yes, these exist) on the TCP channel stating "I am the client". If we receive an "I am the client" message while we think we're the client, we kill the connection. That will allow us to avoid waiting for the connection itself to time out.

Unfortunately, this is shitty.

Shorter Timeout

IMO, it's reasonable to assume that we don't really want to talk to a node with an RTT latency of over a few seconds.
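As a rough sketch of that idea (dialWithBudget is hypothetical and the 5-second figure is illustrative), the dial, including the upgrade, could be bounded with a context timeout:

import (
    "context"
    "time"

    "github.com/libp2p/go-libp2p-core/peer"
    "github.com/libp2p/go-libp2p-core/transport"
    ma "github.com/multiformats/go-multiaddr"
)

// Hypothetical helper: dial with a hard upper bound so a simultaneous-open
// (or otherwise stuck) connection fails fast instead of hanging.
func dialWithBudget(ctx context.Context, tpt transport.Transport, raddr ma.Multiaddr, p peer.ID) (transport.CapableConn, error) {
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()
    return tpt.Dial(ctx, raddr, p)
}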
