tox's Introduction

Tox

This library is an implementation of toxcore in Rust: a P2P, distributed, encrypted, easy-to-use, DHT-based network.

Reference

The Tox Reference should be used for implementing toxcore in Rust. Reference source repository.

If the existing documentation appears incomplete or unclear, an issue / pull request should be filed on the reference repository.

Contributions

...are welcome. 😄 For details, look at CONTRIBUTING.md.

Building

Fairly simple. First, install Rust >= 1.65 and a C compiler (Build Tools for Visual Studio on Windows, GCC or Clang on other platforms).

Then you can build the debug version with

cargo build

To run tests, use:

cargo test

To build docs and open them in your browser:

cargo doc --open

With clippy

To check for clippy warnings (linting), you need nightly Rust with the clippy-preview component.

To check:

cargo clippy --all

To check with tests:

cargo clippy --all --tests

Goals

  • improved toxcore implementation in Rust
  • Rust API
  • documentation
  • tests
  • more

Progress

A fully working tox-node written in pure Rust with a DHT server and a TCP relay can be found here.

Right now we are working on the client part.

Authors

zetox was created by Zetok Zalbavar (zetok/openmailbox/org) and assimilated by the tox-rs team.

tox-rs has contributions from many users. See AUTHORS.md. Thanks everyone!

License

Licensed under GPLv3+ with Apple app store exception.

tox's Issues

NodesResponse.ping_id does not match

recv = NodesResponse(NodesResponse { pk: PublicKey([179, 160, 108, 97, 254, 215, 102, 208, 230, 76, 46, 37, 46, 197, 5, 194, 2, 158, 217, 221, 21, 38, 183, 136, 148, 198, 169, 132, 246, 99, 33, 119]), nonce: Nonce([144, 163, 128, 147, 129, 219, 22, 70, 109, 236, 128, 59, 195, 95, 232, 211, 233, 107, 74, 123, 163, 127, 33, 129]), payload: [60, 47, 203, 1, 64, 10, 175, 60, 182, 142, 187, 244, 165, 151, 148, 217, 147, 209, 142, 77, 200, 127, 229, 165, 222, 49, 39, 90, 242, 205, 134, 2, 233, 49, 83, 11, 139, 224, 54, 237, 51, 101, 226, 229, 91, 12, 180, 207, 209, 40, 71, 86, 32, 149, 140, 211, 51, 126, 100, 123, 4, 116, 47, 7, 230, 42, 16, 172, 194, 214, 222, 45, 87, 8, 202, 93, 218, 36, 151, 195, 247, 248, 80, 2, 140, 96, 65, 36, 130, 200, 16, 172, 67, 27, 191, 30, 33, 99, 54, 158, 83, 3, 85, 144, 167, 197, 31, 118, 154, 60, 203, 56, 233, 196, 193, 188, 188, 107, 240, 52, 93, 33, 101, 204, 104, 138, 136, 158, 18, 102, 247, 8, 120, 92, 95, 109, 23, 122, 32, 51, 202, 243, 203, 181, 18, 103, 175, 173, 103, 46, 248, 246, 30, 237, 54, 179, 83, 246, 129, 90, 179, 230, 132, 47, 133, 68, 168, 15, 222, 151, 222, 205, 142, 200, 80, 100, 207, 75, 204, 130, 138] })
ERROR 2018-06-26T20:52:37Z: dht_server: failed to handle packet: Custom { kind: Other, error: StringError("NodesResponse.ping_id does not match") }
recv = NodesResponse(NodesResponse { pk: PublicKey([179, 160, 108, 97, 254, 215, 102, 208, 230, 76, 46, 37, 46, 197, 5, 194, 2, 158, 217, 221, 21, 38, 183, 136, 148, 198, 169, 132, 246, 99, 33, 119]), nonce: Nonce([198, 115, 197, 153, 237, 174, 128, 121, 81, 36, 134, 97, 253, 253, 42, 105, 5, 200, 89, 54, 16, 58, 38, 131]), payload: [237, 6, 122, 189, 60, 228, 23, 242, 65, 25, 156, 137, 60, 207, 152, 236, 120, 124, 205, 185, 81, 251, 57, 118, 116, 236, 4, 111, 162, 163, 146, 84, 95, 44, 57, 199, 63, 96, 77, 142, 118, 238, 46, 147, 176, 247, 45, 151, 8, 25, 31, 8, 21, 57, 146, 250, 98, 53, 51, 234, 117, 108, 13, 80, 200, 252, 118, 80, 227, 194, 177, 112, 179, 15, 167, 52, 148, 5, 30, 156, 12, 37, 115, 4, 34, 27, 97, 231, 24, 232, 189, 148, 176, 161, 82, 243, 13, 236, 196, 77, 47, 222, 36, 76, 88, 190, 222, 255, 199, 188, 127, 67, 58, 98, 56, 149, 210, 177, 227, 118, 100, 26, 253, 191, 86, 243, 121, 102, 120, 35, 192, 16, 136, 167, 230, 82, 165, 50, 190, 127, 48, 79, 77, 241, 205, 244, 127, 33, 38, 247, 77, 202, 145, 136, 19, 83, 5, 48, 218, 145, 16, 193, 157, 10, 173, 248, 6, 253, 219, 182, 135, 66, 77, 199, 215, 237, 12, 66, 224, 244, 151] })
ERROR 2018-06-26T20:52:37Z: dht_server: failed to handle packet: Custom { kind: Other, error: StringError("NodesResponse.ping_id does not match") }
recv = NodesResponse(NodesResponse { pk: PublicKey([179, 160, 108, 97, 254, 215, 102, 208, 230, 76, 46, 37, 46, 197, 5, 194, 2, 158, 217, 221, 21, 38, 183, 136, 148, 198, 169, 132, 246, 99, 33, 119]), nonce: Nonce([178, 10, 188, 182, 250, 112, 242, 16, 123, 140, 237, 195, 23, 69, 3, 89, 63, 1, 178, 64, 226, 134, 152, 167]), payload: [153, 244, 159, 21, 180, 79, 99, 162, 128, 210, 132, 140, 37, 153, 170, 213, 63, 106, 155, 219, 14, 172, 222, 190, 52, 52, 125, 132, 123, 14, 210, 147, 53, 69, 150, 123, 206, 114, 51, 85, 139, 74, 36, 153, 234, 29, 116, 127, 166, 230, 225, 171, 44, 133, 237, 92, 35, 138, 130, 128, 230, 2, 142, 60, 206, 186, 24, 172, 39, 67, 13, 90, 142, 209, 102, 30, 199, 191, 89, 80, 236, 234, 203, 120, 200, 149, 162, 223, 22, 76, 200, 79, 228, 214, 231, 251, 20, 139, 78, 176, 147, 158, 135, 155, 200, 25, 238, 141, 186, 16, 169, 248, 109, 20, 246, 148, 229, 161, 104, 189, 118, 88, 118, 22, 240, 14, 180, 237, 13, 175, 115, 58, 36, 32, 18, 151, 162, 215, 192, 21, 182, 223, 6, 0, 105, 59, 32, 157, 179, 216, 0, 96, 118, 116, 185, 116, 20, 34, 225, 127, 99, 55, 186, 235, 114, 93, 169, 168, 19, 159, 114, 84, 27, 86, 237, 135, 47, 82, 151, 234, 255] })
ERROR 2018-06-26T20:52:37Z: dht_server: failed to handle packet: Custom { kind: Other, error: StringError("NodesResponse.ping_id does not match") }

AWTCY (AreWeToxClientYet)?

Hi.

Thanks for this project! 👍

So, is there any client like qTox/uTox using tox-rs? If so, could you share its link?!

Thank you!

Use lazy_static

The DHT functions have many timeouts and intervals.
These values are passed as parameters from the main() function.

I am considering the lazy_static crate.

The code will be something like this.


use std::sync::Mutex;
use std::time::Duration;
use lazy_static::lazy_static;

lazy_static! {
    /// timeout for removing clients from peers_cache
    pub static ref KILL_NODE_TIMEOUT: Mutex<Duration> = Mutex::new(Duration::from_secs(0));
    /// timeout for PingRequest and NodesRequest
    pub static ref PING_TIMEOUT: Mutex<Duration> = Mutex::new(Duration::from_secs(0));
    /// interval for ping
    pub static ref PING_INTERVAL: Mutex<Duration> = Mutex::new(Duration::from_secs(0));
    /// timeout for deciding whether a node is offline
    pub static ref BAD_NODE_TIMEOUT: Mutex<Duration> = Mutex::new(Duration::from_secs(0));
    /// interval for random NodesRequest
    pub static ref NODES_REQ_INTERVAL: Mutex<Duration> = Mutex::new(Duration::from_secs(0));
    /// interval for NatPingRequest
    pub static ref NAT_PING_REQ_INTERVAL: Mutex<Duration> = Mutex::new(Duration::from_secs(0));
}

/// set static variables
pub fn set_config_values(args: ConfigArgs) {
    *KILL_NODE_TIMEOUT.lock().unwrap() = Duration::from_secs(args.kill_node_timeout);
    *PING_TIMEOUT.lock().unwrap() = Duration::from_secs(args.ping_timeout);
    *PING_INTERVAL.lock().unwrap() = Duration::from_secs(args.ping_interval);
    *BAD_NODE_TIMEOUT.lock().unwrap() = Duration::from_secs(args.bad_node_timeout);
    *NODES_REQ_INTERVAL.lock().unwrap() = Duration::from_secs(args.nodes_req_interval);
    *NAT_PING_REQ_INTERVAL.lock().unwrap() = Duration::from_secs(args.nat_ping_req_interval);
}

Using lazy_static has some benefits.

  • no need to pass parameters
  • can change value for test functions

But there is a serious problem.

  • tests can't run concurrently, because test functions run on separate threads and the lazy_static variables are shared between them.

Which choice will be better?

DHT Node human documentation.

Description for dht_node

sk : Secret Key of this DHT node

pk : Public Key of this DHT node

  • sk and pk are a key pair, generated when the DhtNode is created.

kbucket : close peers list, which contains PackedNode objects close to this DhtNode's pk

getn_timeout : timeout queue for NodesRequest; when a node sends a NodesRequest packet to a peer, it waits 122 seconds for the response (NodesResponse).

This timeout queue will be replaced by a future timer event.

If the response (NodesResponse) doesn't arrive within 122 seconds (timeout), the node considers the peer offline and removes it from the kbucket.

precomputed_cache : a hashmap for precomputed keys.

Precomputed keys are used in various places, so redundant computation should be avoided.

When a precomputed key is needed, this cache is searched first: if the key exists in the cache, the found key is used; if not, a new precomputed key is computed and stored in the cache for later use.

peers : a hashmap for the tx parts of MPSC channels; these channels are used for communication with peers via the UDP socket.

ping_cache : a hashmap for ping_ids, one ping_id per peer.

nodes_cache : a hashmap for NodesRequest.ids, one id per peer.

nat_ping_cache : a hashmap for NatPingRequest.ping_ids, one ping_id per peer.
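
A minimal sketch of this state, put together (all field and helper types here are placeholders for illustration, not the actual tox-rs definitions):

use std::collections::HashMap;
use std::net::SocketAddr;
use std::sync::mpsc::Sender;

// Placeholder types for illustration only.
type PublicKey = [u8; 32];
type SecretKey = [u8; 32];
type PrecomputedKey = [u8; 32];
type PackedNode = (PublicKey, SocketAddr);

struct DhtNode {
    /// Secret key of this DHT node.
    sk: SecretKey,
    /// Public key of this DHT node.
    pk: PublicKey,
    /// Close peers list: nodes close to our pk (a vector of buckets).
    kbucket: Vec<Vec<PackedNode>>,
    /// Cache of precomputed keys, to avoid redundant computation.
    precomputed_cache: HashMap<PublicKey, PrecomputedKey>,
    /// tx parts of MPSC channels, one per peer UDP address.
    peers: HashMap<SocketAddr, Sender<Vec<u8>>>,
    /// Last ping_id sent to each peer.
    ping_cache: HashMap<PublicKey, u64>,
    /// Last NodesRequest.id sent to each peer.
    nodes_cache: HashMap<PublicKey, u64>,
    /// Last NatPingRequest.ping_id sent to each peer.
    nat_ping_cache: HashMap<PublicKey, u64>,
}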

DHT Operation

At startup, the DHT node sends a NodesRequest packet to its known friends list.

If a known friend is online, it responds with a NodesResponse packet.

  • DHT node listens on a UDP socket address
  • For each incoming UDP packet
    • register peer : register the peer address and the tx part of an MPSC channel.
    • handle packet : process the packet and respond if needed.

register peer : create an MPSC channel, then register the tx part of the channel in the peers cache.

handle packet : process the packet based on its kind and respond if needed. The packet kinds are:

  • PingRequest
  • PingResponse
  • NodesRequest
  • NodesResponse
  • DhtRequest
    • NatPingRequest
    • NatPingResponse

Ping Request

PingRequest : every 60 seconds, the DHT node sends a PingRequest packet to its peers to check whether they are alive.

When a peer receives a PingRequest, create_ping_resp(received PingRequest) is called.

create_ping_resp() does:

  • decrypt the received PingRequest packet's encrypted payload
  • decryption is done by calling get_payload(SK)
  • if decryption == success, make a PingResponsePayload which has the same ping_id
  • get the precomputed symmetric key using get_symmetric_key(), which caches precomputed keys
  • for caching precomputed keys, the DHT node uses a HashMap named precomputed_cache
  • call PingResponse::new(PingResponsePayload) to make an encrypted PingResponse packet
  • the PingResponse packet is sent to the peer by calling send_to_peer(packet)

get_payload() does:

  • decrypt the packet's payload using the SK and PK pair
  • the decrypted payload is a bytes slice
  • to get a PingRequestPayload object, call from_bytes(decrypted bytes slice)
  • if from_bytes() == success, return the object

get_symmetric_key() does:

  • search the hashmap using (SK, PK) as the key to find the value, which is a precomputed key
  • if the search succeeds, return the precomputed key
  • if the search fails, create a new precomputed key for (SK, PK)
  • insert the new precomputed key into the hashmap for later use
  • return the new precomputed key
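
A minimal sketch of this caching logic (precompute() stands in for the actual key derivation, e.g. libsodium's crypto_box_beforenm; names and types are illustrative, not the tox-rs API):

use std::collections::HashMap;

type PublicKey = [u8; 32];
type SecretKey = [u8; 32];
type PrecomputedKey = [u8; 32];

// Stand-in for the real Diffie-Hellman precomputation; returns a dummy value.
fn precompute(_pk: &PublicKey, _sk: &SecretKey) -> PrecomputedKey {
    [0u8; 32]
}

fn get_symmetric_key(
    cache: &mut HashMap<PublicKey, PrecomputedKey>,
    sk: &SecretKey,
    pk: &PublicKey,
) -> PrecomputedKey {
    // Reuse the cached key if present; otherwise compute and store it.
    *cache.entry(*pk).or_insert_with(|| precompute(pk, sk))
}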

PingResponse::new() does:

  • call to_bytes() to get a bytes slice from the PingResponsePayload object
  • encrypt the bytes slice using the precomputed key
  • return the new PingResponse object

send_to_peer() does:

  • search the hashmap, using the peer's UDP socket address as the key, to find the tx part of the peer's MPSC channel
  • if the search succeeds, write the packet to tx
  • if the search fails, the peer is not registered, so return an error

DHT node stores 8 nodes closest to each of the public keys in its DHT friends list.

PingRequest has a ping_id which provides a way to verify that the response (PingResponse) is correct.

Ping Response

PingResponse : When a node receives a PingRequest packet, it responds with this packet.

When a node that sent a PingRequest receives the PingResponse, the timeout future completes and the peer's socket address is added or updated in the kbucket and known friends list.

If the PingResponse doesn't arrive within 122 seconds, the timeout future results in an error; the peer is then removed from the kbucket and, if it is a known friend, marked as offline.

When a peer receives a PingResponse, it calls handle_ping_resp().

handle_ping_resp() does:

  • decrypt the received packet's payload by calling get_payload()
  • the DhtNode struct has a hashmap that holds one ping_id per peer
  • search the hashmap to find the ping_id last sent to the peer
  • check whether hashmap.ping_id == PingResponse.ping_id
  • if the check is true, the timeout future completes
  • if the check is not true, the PingResponse is dropped as if nothing was received
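
The core of that check might look like this (a sketch with placeholder types; the real tox-rs handler differs):

use std::collections::HashMap;

type PublicKey = [u8; 32];

/// Returns true if the response matches the last ping_id sent to the peer,
/// meaning the timeout future may complete; false means the packet is dropped.
fn check_ping_id(
    ping_cache: &mut HashMap<PublicKey, u64>,
    peer: &PublicKey,
    resp_ping_id: u64,
) -> bool {
    match ping_cache.get(peer) {
        Some(&sent_id) if sent_id == resp_ping_id => {
            ping_cache.remove(peer); // a ping_id is single use
            true
        }
        _ => false,
    }
}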

Nodes Request

NodesRequest : Every 20 seconds, DHT node sends NodesRequest packet to a random node in kbucket and its known friends list.

The DHT node runs future interval timers to send periodic packets. One of these interval timers is the NodesRequest timer.

When NodesRequest timer expires:

  • select random peer in kbucket and known friends list
  • call create_nodes_req()
  • send packet to the peer

create_nodes_req() does:

  • create a NodesRequestPayload struct object
  • call get_symmetric_key() to get the precomputed key
  • call NodesRequest::new(NodesRequestPayload) to create a new object
  • return the created NodesRequest object

NodesRequest::new() does:

  • call to_bytes() to get a bytes slice from the NodesRequestPayload struct object
  • encrypt the bytes slice using the precomputed key
  • create and return the new NodesRequest object

Nodes Response

When receiving NodesRequest:

  • call create_nodes_res()
  • if create_nodes_res() results in success, then call send_to_peer()

create_nodes_res() does:

  • call get_payload() to get the decrypted NodesRequestPayload object
  • call response(kbucket) to get a NodesResponsePayload object
  • call NodesResponse::new() to get a new object, and return the object

response() does:

  • call get_closest(PK) to get up to 4 packed nodes closest to PK
  • call with_nodes(packed nodes vector)
  • return the result of with_nodes(), which is a NodesResponsePayload struct

get_closest() does:

  • create a new bucket object with a capacity of 4
  • iterate over all nodes in the kbucket, calling try_add(PK, node)
  • return the bucket object, which has up to 4 entries

try_add() does:

  • the parameters are base_PK and new_node
  • if new_node is closer than a current bucket entry, it will be added or will replace that entry
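
A sketch of the XOR-metric selection these steps rely on (capacity 4, as above; it uses a plain sort instead of the incremental try_add, so this is illustrative only):

use std::cmp::Ordering;

type PublicKey = [u8; 32];

// Compare two keys by XOR distance to `base`, byte by byte.
fn distance(base: &PublicKey, pk1: &PublicKey, pk2: &PublicKey) -> Ordering {
    for i in 0..32 {
        if pk1[i] != pk2[i] {
            return (base[i] ^ pk1[i]).cmp(&(base[i] ^ pk2[i]));
        }
    }
    Ordering::Equal
}

// Keep up to 4 nodes closest to `base`: sort by distance and truncate.
fn get_closest(base: &PublicKey, kbucket: &[PublicKey]) -> Vec<PublicKey> {
    let mut nodes = kbucket.to_vec();
    nodes.sort_by(|a, b| distance(base, a, b));
    nodes.truncate(4);
    nodes
}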

with_nodes() does:

  • check whether the packed nodes vector is empty or has more than 4 entries
  • if so, return None
  • otherwise, create and return a NodesResponsePayload object

NodesResponse::new() does:

  • call to_bytes() to get a bytes slice of the NodesResponsePayload
  • encrypt the bytes slice
  • create and return the new NodesResponse object

When a node that sent a NodesRequest receives the NodesResponse packet, the timeout future completes and the peer's socket address is added or updated in the kbucket or known friends list.

If the NodesResponse doesn't arrive within 122 seconds, the future timer results in an error; the peer is then removed from the kbucket or, if it is a known friend, marked as offline.

When receiving a NodesResponse, handle_nodes_resp() is called.

handle_nodes_resp() does:

  • call get_payload() to get the decrypted payload
  • the timeout future completes
  • call try_add() to add the up to 4 nodes contained in the payload of the NodesResponse packet

try_add() for kbucket does:

  • calculate the kbucket index to get the proper bucket for the new node
  • on the calculated bucket, call try_add() for bucket

*note : a kbucket is a vector of buckets; a bucket is a vector of PackedNodes.
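
The index calculation in the first step is commonly the number of leading zero bits of XOR(own_pk, node_pk); a sketch of that scheme (the usual Kademlia convention, not necessarily the exact tox-rs code):

type PublicKey = [u8; 32];

/// Number of leading zero bits of own_pk XOR node_pk, or None if the keys are equal.
fn kbucket_index(own_pk: &PublicKey, node_pk: &PublicKey) -> Option<u8> {
    for (i, (a, b)) in own_pk.iter().zip(node_pk.iter()).enumerate() {
        let xored = a ^ b;
        if xored != 0 {
            return Some(i as u8 * 8 + xored.leading_zeros() as u8);
        }
    }
    None // equal keys have no bucket
}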

DhtRequest

When hole-punching starts, a NatPingRequest packet is sent every 3 seconds to each known friend that is not connected directly.

If there is no response for 6 seconds, hole-punching stops.

When a peer receives a NatPingRequest from a friend, the peer checks the receiver's PK against its own DHT PK.

If the two PKs are the same, the peer responds with a NatPingResponse.

If not, the peer searches its kbucket and known friends list for the PK matching the receiver's PK of the NatPingRequest.

If the peer finds the PK in its kbucket or friends list, it resends the NatPingRequest to that peer.

NatPingRequest

When the future interval timer for NatPingRequest expires, the DHT node calls create_nat_ping_req() and then send_to_peer().

create_nat_ping_req() does:

  • create a NatPingRequestPayload object with a random ping_id
  • call NatPingRequest::new() to encrypt the payload and get a NatPingRequest object
  • call send_to_peer() to send the packet to the peer via the UDP socket

When a peer receives a NatPingRequest, it calls handle_nat_ping_req().

handle_nat_ping_req() does:

  • call check_pk() to check whether the received packet's PK and the node's own PK are the same
  • if the two PKs are the same, call create_nat_ping_resp() to make the response packet, then call send_to_peer()
  • if not, call search_matching_peer() to find a peer whose PK matches the received packet's PK
  • if search_matching_peer() succeeds, resend the NatPingRequest packet to that peer by calling send_to_peer()
  • if not, the node drops the NatPingRequest packet, and receiving it has no effect

create_nat_ping_resp() does:

  • call get_payload(SK) to decrypt the packet's payload
  • if decryption == success, make a NatPingResponsePayload which has the same ping_id
  • call get_symmetric_key() to get the precomputed key
  • call NatPingResponse::new(NatPingResponsePayload) to make the encrypted NatPingResponse packet
  • the NatPingResponse packet is sent to the peer by calling send_to_peer(packet)

search_matching_peer() does:

  • iterate over the friends list to check whether the received PK and a friend's PK are the same
  • if the two PKs are the same, return the friend's socket address
  • if not, iterate over the kbucket to check whether the received PK and a node's PK in the kbucket are the same
  • if the two PKs are the same, return the node's socket address
  • if not, return None
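
A sketch of that lookup (illustrative types, not the actual tox-rs API):

use std::net::SocketAddr;

type PublicKey = [u8; 32];
type PackedNode = (PublicKey, SocketAddr);

fn search_matching_peer(
    friends: &[PackedNode],
    kbucket: &[PackedNode],
    target: &PublicKey,
) -> Option<SocketAddr> {
    // Friends list first, then the kbucket.
    friends
        .iter()
        .chain(kbucket.iter())
        .find(|(pk, _)| pk == target)
        .map(|(_, saddr)| *saddr)
}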

NatPingResponse

When a node that sent a NatPingRequest receives the NatPingResponse, it calls handle_nat_ping_resp().

handle_nat_ping_resp() does:

  • call get_payload() to get the returned ping_id
  • check whether the sent ping_id and the returned ping_id are the same
  • if they are the same, the future timer completes and hole-punching starts
  • if not, drop the received packet; receiving it has no effect

Lan Discovery segfault

When running $ cargo run --example dht_node

Starting program: /home/humbug/zetox/target/debug/examples/dht_node 
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
[New Thread 0x7ffff69ff700 (LWP 25611)]
[New Thread 0x7ffff63ff700 (LWP 25612)]
[New Thread 0x7ffff5dff700 (LWP 25613)]
[New Thread 0x7ffff57ff700 (LWP 25614)]
nat_wakeup
recv = DhtRequest(DhtRequest { rpk: PublicKey([15, 107, 126, 130, 81, 55, 154, 157, 192, 117, 0, 225, 119, 43, 48, 117, 84, 109, 112, 57, 243, 216, 4, 171, 185, 111, 33, 146, 221, 31, 77, 118]), spk: PublicKey([186, 227, 149, 99, 76, 17, 146, 164, 92, 0, 135, 84, 26, 241, 252, 149, 204, 231, 16, 144, 3, 178, 95, 170, 117, 146, 85, 171, 55, 236, 155, 123]), nonce: Nonce([125, 237, 186, 21, 65, 189, 85, 24, 226, 60, 155, 217, 29, 118, 253, 5, 234, 67, 113, 199, 74, 13, 217, 251]), payload: [231, 125, 48, 121, 172, 244, 85, 240, 54, 170, 147, 106, 183, 87, 71, 69, 128, 26, 136, 200, 225, 140, 143, 26, 216, 151] })
nat_wakeup
recv = DhtRequest(DhtRequest { rpk: PublicKey([15, 107, 126, 130, 81, 55, 154, 157, 192, 117, 0, 225, 119, 43, 48, 117, 84, 109, 112, 57, 243, 216, 4, 171, 185, 111, 33, 146, 221, 31, 77, 118]), spk: PublicKey([186, 227, 149, 99, 76, 17, 146, 164, 92, 0, 135, 84, 26, 241, 252, 149, 204, 231, 16, 144, 3, 178, 95, 170, 117, 146, 85, 171, 55, 236, 155, 123]), nonce: Nonce([139, 241, 179, 179, 95, 133, 79, 59, 229, 252, 46, 221, 122, 137, 147, 157, 216, 65, 94, 219, 204, 64, 143, 160]), payload: [215, 123, 23, 74, 183, 63, 237, 66, 78, 129, 78, 192, 192, 112, 250, 21, 104, 197, 20, 103, 168, 32, 50, 159, 139, 255] })
nat_wakeup
recv = DhtRequest(DhtRequest { rpk: PublicKey([15, 107, 126, 130, 81, 55, 154, 157, 192, 117, 0, 225, 119, 43, 48, 117, 84, 109, 112, 57, 243, 216, 4, 171, 185, 111, 33, 146, 221, 31, 77, 118]), spk: PublicKey([186, 227, 149, 99, 76, 17, 146, 164, 92, 0, 135, 84, 26, 241, 252, 149, 204, 231, 16, 144, 3, 178, 95, 170, 117, 146, 85, 171, 55, 236, 155, 123]), nonce: Nonce([40, 29, 28, 143, 170, 224, 191, 18, 6, 16, 7, 201, 203, 180, 240, 78, 11, 57, 173, 78, 85, 56, 195, 136]), payload: [87, 64, 179, 28, 187, 46, 104, 118, 191, 40, 161, 20, 177, 51, 213, 26, 148, 60, 216, 224, 132, 16, 13, 45, 119, 5] })
lan_wakeup

Thread 4 "tokio-runtime-w" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff5dff700 (LWP 25613)]
0x00005555557e8e03 in get_if_addrs::getifaddrs_posix::sockaddr_to_ipaddr::haaf65eefab8dbaee (sockaddr=0x0)
    at /home/humbug/.cargo/registry/src/github.com-1ecc6299db9ec823/get_if_addrs-0.5.0/src/lib.rs:163
163	        let sa_family = u32::from(unsafe { *sockaddr }.sa_family);
   │130         // get broadcast addresses for host's network interfaces
   │131         fn get_ipv4_broadcast_addrs() -> Vec<IpAddr> {
  >│132             let ifs = get_if_addrs::get_if_addrs().expect("no network interface");
   │133             ifs.iter().filter_map(|interface|
   │134                 match interface.addr {
   │135                     IfAddr::V4(ref addr) => addr.broadcast,
   │136                     _ => None,
   │137             })
   │138             .map(|ipv4|
   │139                 IpAddr::V4(ipv4)
   │140             ).collect()
   │141         }

Ping Array Human Doc.

Description of Ping Array

1. Purpose

When a DHT node runs, it receives LanDiscovery packets.
Usually, one machine has many logical TCP/IP interfaces.
Some are real Ethernet network cards; some are logical interfaces such as loopback or a logical bridge for virtual machines.
So one Tox client program sends many LanDiscovery packets to the same IP address.
When a node receives a LanDiscovery packet, it responds with a NodesRequest packet.
Receiving many LanDiscovery packets therefore causes many NodesRequest packets to be sent.
A queue of ping_ids is needed to verify incoming NodesResponse packets correctly.

2. Ping Array's processing when NodesRequest

When one node sends a NodesRequest to a peer, it inserts an entry into the PingArray using ping_array_add(PackedNode).

- ping_array_add(PackedNode) does

    - get ping_id: u64 = random_u64()
    - get index = adding_index % total_size where total_size is the size of this PingArray's array.
    - get ping_id = ping_id - (ping_id % total_size)
    - ping_id = ping_id + index
    - adding_index = adding_index + 1

Now ping_id will look like this:
- xxxxxxxxxxx00001
- xxxxxxxxxxx00002
- xxxxxxxxxxx00003
.
.
.
.
- xxxxxxxxxxx99999
- xxxxxxxxxxx00000
- xxxxxxxxxxx00001
- xxxxxxxxxxx00002

where 'xxx' means random numbers.

    - Insert ping_id as key, PackedNode{pk,SocketAddr} as value into HashMap<key, value>
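
A sketch of ping_array_add() following the steps above (placeholder types; total_size must be non-zero, and the random ping_id here uses the rand crate):

use std::collections::HashMap;
use std::net::SocketAddr;

type PublicKey = [u8; 32];
type PackedNode = (PublicKey, SocketAddr);

struct PingArray {
    map: HashMap<u64, PackedNode>,
    adding_index: u64,
    total_size: u64,
}

impl PingArray {
    fn add(&mut self, node: PackedNode) -> u64 {
        let mut ping_id: u64 = rand::random(); // requires the rand crate
        let index = self.adding_index % self.total_size;
        ping_id -= ping_id % self.total_size; // clear the low slot digits
        ping_id += index;                     // encode the slot index
        self.adding_index = self.adding_index.wrapping_add(1);
        self.map.insert(ping_id, node);
        ping_id
    }
}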

3. Ping Array's processing when NodesResponse

When one node receives a NodesResponse packet, it checks the ping_id using the PingArray function ping_array_check(ping_id).

- ping_array_check(ping_id) does

    - get the <K,V> entry from the HashMap using ping_id as the key.
    - if the key doesn't exist in the HashMap, it is either an invalid ping_id or a timed-out ping_id.
    - if the key exists in the HashMap, get the PackedNode, which is the value in the hashmap.
    - return the value, which is a PackedNode.
    - the NodesResponse handler can then check this PackedNode's PK and SocketAddr against the received packet's PK and UDP address
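
Extending the PingArray sketch above, the check is a keyed removal:

impl PingArray {
    /// Returns the stored PackedNode if ping_id is known; None means the
    /// ping_id is either invalid or timed out.
    fn check(&mut self, ping_id: u64) -> Option<PackedNode> {
        self.map.remove(&ping_id)
    }
}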

4. Usage of Ping Array

Ping Array is a structure.
Many PingArray objects can be created and used.

NodesReq/Resp can use one PingArray object.
NatPingReq/Resp can use another PingArray object.
Other Send/Recv packet pairs can use separate PingArrays.

Each PingArray object can set its own timeout period.
NodesReq/Resp sets it to 5 seconds.
PingRequest/Resp may set it to 60 seconds.

TCP connections human doc.

TCP connections

TCP connections resides between net_crypto and tcp_client.
It serves net_crypto using tcp_client.
It provides net_crypto a reliable connection to a friend via multiple TCP relays.
When a Tox client connects to a friend via a TCP relay, normally 3 redundant connections are established.
One connection is used for data send/recv; the 2 others are backups.
In toxcore the maximum number of redundant connections is 6.
A TCP connection can go into sleep mode.
Going into sleep mode can occur when a UDP connection is established, because Tox prefers UDP over TCP.
When the established UDP connection goes down, the TCP connections are awakened.

Definition of names.

- Connections has
    -  set of ConnOfClient(HashMap)
    -  set of Connection(HashMap)
- ConnOfClient has
    - handle of TcpClient connection
    - IP, port, PK : to save for sleeping status
- Connection has
    - handles of 3 to 6 ConnToRelay connections for redundancy
- ConnToRelay has
    - id as a key to ConnOfClient hashmap
    - connection_id of Routing Response packet
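
A rough sketch of these structures (placeholder types and names, not the actual tox-rs definitions):

use std::collections::HashMap;
use std::net::SocketAddr;

type PublicKey = [u8; 32];

enum ClientStatus { Sleeping, Connected, Confirmed, Disconnected }

struct ConnOfClient {
    status: ClientStatus,
    /// IP, port and PK, saved so a sleeping connection can be restored.
    saved_addr: Option<SocketAddr>,
    saved_pk: Option<PublicKey>,
}

/// One connection to a friend, held via several redundant relays.
struct Connection {
    relays: Vec<ConnToRelay>, // 3 to 6 entries
}

struct ConnToRelay {
    /// Key into the ConnOfClient hashmap.
    client_id: u64,
    /// connection_id from the routing response packet.
    connection_id: u8,
}

struct Connections {
    clients: HashMap<u64, ConnOfClient>,
    connections: HashMap<PublicKey, Connection>,
}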
  • do_tcp_connections : main function of the tcp_connections module, called periodically by the messenger

    • iterate all ConnOfClient in ConnOfClients
      • if ConnOfClient is not SLEEPING
        • call do_tcp_connection in TcpClient module
        • check if TcpClient connection is DISCONNECTED
          • check if ConnOfClient to the friend is CONNECTED
            • reconnect to the relay
          • else
            • kill the relay
        • check if ConnOfClient is not CONFIRMED and TcpClient connection is CONFIRMED
          • send routing request
          • set connection status to CONNECTED
        • check if the SLEEP conditions are true
          • get into sleep
      • else if the wakeup conditions are true
        • wakeup ConnOfClient
    • loop end
    • delete unused connections
  • end

  • reconnect to relay

    • check if ConnOfClient is SLEEPING then
      • return
    • get IP, port, PK of connection
    • kill ConnOfClient
    • make new ConnOfClient for IP, port, PK
    • init variables and status
  • end

  • get into sleep of ConnOfClient

    • check if ConnOfClient is not CONNECTED
      • return
    • check if lock_count != sleep_count
      • return
    • get IP, port and PK
    • kill ConnOfClient
    • reset variables and status
  • end

  • wakeup ConnOfClient

    • check if ConnOfClient is not SLEEPING
      • return
    • make new ConnOfClient using saved IP, port, PK
    • init variables and status
  • end

Sometimes, cargo test fails on Mac OSX

failures:

---- toxcore::crypto_core::tests::encrypt_data_symmetric_test stdout ----
thread 'toxcore::crypto_core::tests::encrypt_data_symmetric_test' panicked at 'called `Result::unwrap()` on an `Err` value: ()', libcore/result.rs:1009:5
note: Run with `RUST_BACKTRACE=1` for a backtrace.


failures:
    toxcore::crypto_core::tests::encrypt_data_symmetric_test

test result: FAILED. 527 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out

Ruby

Is it possible to use this through Ruby?

PingResponse.ping_id does not match

When I run dht_node example, I get too many errors:

ERROR 2018-04-05T14:19:47Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("PingResponse.ping_id does not match") }

DHT Request hardening human doc.

DHT Hardening

  • Why we use hardening.

DhtRequest hardening is used to avoid DoS attacks.
A Tox node can enter the Tox network if it can respond with a valid PingResponse.
So an attacker can insert many fake nodes to prevent two valid Tox nodes from connecting to each other.
Hardening is used to defeat this attack.

  • To implement hardening, we introduce a new packet type named CRYPTO_PACKET_HARDENING whose value is 48.
  • We should extend the existing RequestQueue struct to a generic struct:

From:

pub struct RequestQueue {
    /// Map that stores requests IDs with time when they were generated.
    ping_map: HashMap<(PublicKey, u64), Instant>,
    /// Timeout when requests IDs are considered invalid.
    timeout: Duration,
}

To:

pub struct RequestQueue<T: Eq + Hash> {
    /// Map that stores requests IDs with time when they were generated.
    ping_map: HashMap<(PublicKey, T), Instant>,
    /// Timeout when requests IDs are considered invalid.
    timeout: Duration,
}

Here T may be one of these

u64

Or

struct HardenPingId {
    sendback_node: PackedNode,
    ping_id: u64,
}
  • How it works
    • periodically send a hardening getnodes_req to a random node in the close_list, for all of the nodes in the close_list.
    • handle an incoming hardening getnodes_req by responding with a hardening getnodes_resp, or handle a hardening getnodes_resp by checking that the responding node's PK is in my close_list.

Lan Discovery Human doc.

Description for lan discovery

Every 10 seconds, the node sends a LanDiscovery packet to its interfaces' broadcast addresses, the global broadcast address (255.255.255.255), and the IPv6 all-nodes multicast address (FF02::1).

When the interval timer expires, send_lan_discovery() is called.

send_lan_discovery() (see the sketch below)

- call get_broadcast_addresses()
    - call get_ipv4_broadcast_addrs(), which uses the "get_if_addrs" crate to get the IPv4 broadcast addresses.
    - after getting the interfaces' IPv4 broadcast addresses, it adds the IPv4 global broadcast address (255.255.255.255) and the IPv6 all-nodes multicast address (FF02::1).

- create a LanDiscovery packet

- send the packet to the broadcast addresses
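
A sketch of the address gathering, based on the get_if_addrs crate usage shown in the segfault report above (note FF02::1 is technically the IPv6 all-nodes multicast address, since IPv6 has no broadcast):

use std::net::{IpAddr, Ipv4Addr, Ipv6Addr};

// Requires the get_if_addrs crate.
fn get_broadcast_addresses() -> Vec<IpAddr> {
    let mut addrs: Vec<IpAddr> = get_if_addrs::get_if_addrs()
        .expect("no network interface")
        .iter()
        .filter_map(|interface| match interface.addr {
            get_if_addrs::IfAddr::V4(ref addr) => addr.broadcast.map(IpAddr::V4),
            _ => None,
        })
        .collect();
    // Add the IPv4 global broadcast and IPv6 all-nodes addresses.
    addrs.push(IpAddr::V4(Ipv4Addr::new(255, 255, 255, 255)));
    addrs.push(IpAddr::V6("FF02::1".parse::<Ipv6Addr>().unwrap()));
    addrs
}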

When a node receives a LanDiscovery packet, it creates a NodesRequest packet and sends it back to the sender of the LanDiscovery packet.

In the NodesRequest packet, the pk of the payload is the sender's PK from the LanDiscovery.

When the node that sent the LanDiscovery receives the NodesRequest packet, it responds with a NodesResponse packet. This results in LAN bootstrapping.

Proposal to improve errors definitions

Motivation

Currently we have to write a lot of boilerplate code to define custom errors. It includes Error and ErrorKind definitions, Into trait implementation and tests. This proposal demonstrates some changes that can help us to reduce necessary code for errors definitions.

Detailed design

Into trait

To build one error from another we define the Into trait. This means that an error kind is always logically bound to some error type it is constructed from. In turn this leads us to very generic error kinds and an inability to distinguish different errors with the same type. Let's consider GetPayloadError as an example. We always convert it to the same GetPayload variant, which means the only way to see what packet was invalid is to check the backtrace. Another example is std::io::Error, which can happen in various cases but is bound to the SendTo variant.

Instead of using Into trait we can build errors in place:

SomeError::from(e)

turns into

e.context(ErrorKind::Variant).into()

into here converts Context<ErrorKind> to Error and can be omitted under some circumstances (? operator, including async functions).

error_kind macros

To reduce Error and ErrorKind definitions we can define a macro:

macro_rules! error_kind {
    ($(#[$error_attr:meta])* $error:ident, $(#[$kind_attr:meta])* $kind:ident { $($variants:tt)* }) => {
        #[derive(Debug)]
        $(#[$error_attr])*
        pub struct $error {
            ctx: Context<$kind>,
        }

        impl $error {
            /// Return the kind of this error.
            pub fn kind(&self) -> &$kind {
                self.ctx.get_context()
            }
        }

        impl Fail for $error {
            fn cause(&self) -> Option<&Fail> {
                self.ctx.cause()
            }

            fn backtrace(&self) -> Option<&Backtrace> {
                self.ctx.backtrace()
            }
        }

        impl fmt::Display for $error {
            fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
                self.ctx.fmt(f)
            }
        }

        #[derive(Clone, Debug, Eq, PartialEq, Fail)]
        $(#[$kind_attr])*
        pub enum $kind {
            $($variants)*
        }

        impl From<$kind> for $error {
            fn from(kind: $kind) -> $error {
                $error::from(Context::new(kind))
            }
        }

        impl From<Context<$kind>> for $error {
            fn from(ctx: Context<$kind>) -> $error {
                $error { ctx }
            }
        }
    }
}

Then we can define errors with:

error_kind! {
    #[doc = "Some doc"]
    SomeError,
    #[doc = "Some doc"]
    SomeErrorKind {
        #[doc = "Some doc"]
        #[fail(display = "Some description")]
        Variant1,
        #[doc = "Some doc"]
        #[fail(display = "Some description")]
        Variant2,
    }
}

Tests

To satisfy kcov we have to write a lot of tests that don't really check anything useful. This includes the Display and From traits, and the cause and backtrace methods.

With the macro defined in the previous section we can write the tests only once:

#[cfg(test)]
mod tests {
    use super::*;

    error_kind! {
        TestError,
        TestErrorKind {
            #[fail(display = "Variant1")]
            Variant1,
            #[fail(display = "Variant2")]
            Variant2,
        }
    }

    #[test]
    fn test_error() {
        assert_eq!(format!("{}", TestErrorKind::Variant1), "Variant1".to_owned());
        assert_eq!(format!("{}", TestErrorKind::Variant2), "Variant2".to_owned());
    }

    #[test]
    fn test_error_variant_1() {
        let error = TestError::from(TestErrorKind::Variant1);
        assert!(error.cause().is_none());
        assert!(error.backtrace().is_some());
        assert_eq!(format!("{}", error), "Variant1".to_owned());
    }

    #[test]
    fn test_error_variant_2() {
        let error = TestError::from(TestErrorKind::Variant2);
        assert!(error.cause().is_none());
        assert!(error.backtrace().is_some());
        assert_eq!(format!("{}", error), "Variant2".to_owned());
    }
}

These tests will cover all error-related code since it's defined by a macro.

Notes

  • We should avoid building errors from other statically known errors, e.g. OneError::from(OneErrorKind::Variant1).context(AnotherErrorKind::Variant2).into(). Instead, we should add Variant1 to AnotherErrorKind.

Alternatives

  • Use alternative patterns as described in this section.
  • Drop the failure dependency and implement all error enums ourselves.

Drawbacks

  • We will have to use doc attribute instead of ///.

Multicast vs. Broadcast for Lan Discovery

For implementing LAN discovery in DHT, I am considering multicast or broadcast for sending the LanDiscovery packet.
Multicast is obviously better than broadcast.
But there is more than one toxcore implementation.
Our toxcore should work together with the other toxcore implementations.
If that is true, then we have to use broadcast.

Errors during dht server's start-up

The DHT server has been running for a long time.
I got the log messages below.
After some time passed, there are no more errors.

NamSooui-MacBook-Pro:examples joe$ RUST_LOG=error cargo run --example dht_node
Compiling tox v0.0.4 (file:///Users/joe/work/tox)
Finished dev [unoptimized + debuginfo] target(s) in 5.36 secs
Running /Users/joe/work/tox/target/debug/examples/dht_node
ERROR 2018-05-19T03:55:06Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionRequest2::get_payload() failed: Custom { kind: Other, error: StringError("OnionRequest2 decrypt error.") }") }
ERROR 2018-05-19T03:55:06Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionRequest2::get_payload() failed: Custom { kind: Other, error: StringError("OnionRequest2 decrypt error.") }") }
ERROR 2018-05-19T03:55:08Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ERROR 2018-05-19T03:55:10Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
ERROR 2018-05-19T03:55:10Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
ERROR 2018-05-19T03:55:11Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
ERROR 2018-05-19T03:55:13Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
lan_wakeup
ERROR 2018-05-19T03:55:24Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
lan_wakeup
ERROR 2018-05-19T03:55:28Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
ERROR 2018-05-19T03:55:30Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
ERROR 2018-05-19T03:55:31Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
lan_wakeup
ERROR 2018-05-19T03:55:39Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
lan_wakeup
ERROR 2018-05-19T03:55:48Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ERROR 2018-05-19T03:55:48Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ERROR 2018-05-19T03:55:52Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
lan_wakeup
ERROR 2018-05-19T03:56:00Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
ERROR 2018-05-19T03:56:02Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ping_wakeup
lan_wakeup
ERROR 2018-05-19T03:56:08Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ERROR 2018-05-19T03:56:09Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ERROR 2018-05-19T03:56:09Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
ERROR 2018-05-19T03:56:10Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
lan_wakeup
ERROR 2018-05-19T03:56:17Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
lan_wakeup
ERROR 2018-05-19T03:56:32Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
lan_wakeup
lan_wakeup
ERROR 2018-05-19T03:56:48Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ERROR 2018-05-19T03:56:48Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
ERROR 2018-05-19T03:56:52Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
lan_wakeup
ERROR 2018-05-19T03:57:02Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ping_wakeup
lan_wakeup
ERROR 2018-05-19T03:57:08Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
lan_wakeup
ERROR 2018-05-19T03:57:21Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("OnionAnnounceRequest decrypt error.") }
lan_wakeup
lan_wakeup
lan_wakeup
ERROR 2018-05-19T03:57:48Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
ERROR 2018-05-19T03:57:52Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesRequest::get_payload() failed: Custom { kind: Other, error: StringError("NodesRequest decrypt error.") }") }
lan_wakeup
ping_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
ping_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
ping_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
lan_wakeup
lan_wakeup

How to deal with IPv6 and IPv4

How to deal with IPv6

Basic data structures

typedef struct IP {
    Family family;
    union {
        IP4 v4;
        IP6 v6;
    } ip;
} IP;

typedef struct IP_Port {
    IP ip;
    uint16_t port;
} IP_Port;

IP_Port is the container of an IP address and port; IP is a union of IPv4 and IPv6.
So an IP address can be either IPv4 or IPv6.

typedef struct {
    IP_Port     ip_port;
    uint64_t    timestamp;
    uint64_t    last_pinged;

    Hardening hardening;
    /* Returned by this node. Either our friend or us. */
    IP_Port     ret_ip_port;
    uint64_t    ret_timestamp;
} IPPTsPng;

typedef struct {
    uint8_t     public_key[CRYPTO_PUBLIC_KEY_SIZE];
    IPPTsPng    assoc4;
    IPPTsPng    assoc6;
} Client_data;

Nodes in the DHT network can have an IPv4 address and an IPv6 address at the same time.

struct Networking_Core {
    const Logger *log;
    Packet_Handler packethandlers[256];

    Family family;
    uint16_t port;
    /* Our UDP socket. */
    Socket sock;
};

But the DHT server can run in only one of IPv4 or IPv6 mode, because 'dht->net' can have only one socket.

NodesResponse

When getting the close nodes list, a node's IP is selected by the code below.

if (net_family_is_ipv4(sa_family)) {
    ipptp = &client->assoc4;
} else if (net_family_is_ipv6(sa_family)) {
    ipptp = &client->assoc6;
} else if (client->assoc4.timestamp >= client->assoc6.timestamp) {
    ipptp = &client->assoc4;
} else {
    ipptp = &client->assoc6;
}

Here 'sa_family' is 'unspec'.
So the resulting IP address type is IPv4 or IPv6, depending on its last-received timestamp.

Nodes in a NodesResponse packet can mix IPv4 and IPv6 addresses.

Sending packet

  1. When DHT server runs in IPv4 mode

    If the target node's IP is IPv6, log an ERROR and don't send the packet. Because nodes in a NodesResponse are selected by their timestamps, the DHT server may be unable to send a packet to an IPv6 node, so this is an error case.

  2. When DHT server runs in IPv6 mode
    If the target node's IP is IPv4, convert the address to IPv6 format, then send.
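
For rule 2, Rust's standard library already provides this mapping; a one-line illustration:

use std::net::Ipv4Addr;

fn main() {
    let v4 = Ipv4Addr::new(1, 2, 3, 4);
    // An IPv4 address embedded in IPv6 format: ::ffff:1.2.3.4
    println!("{}", v4.to_ipv6_mapped());
}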

encrypt & decrypt between PC

I am testing dht_node_new.rs

First, I tried peer-to-peer (Linux VM to macOS) communication, sending PingRequest to each other.
There is a get_payload() decrypt error.

Then I tried loopback on macOS, sending PingRequest.
There is no get_payload() decrypt error.

The versions of libsodium on the Mac and on Linux are different.

PackedNode type need modifications.

Current PackedNode is


 pub struct PackedNode {
     /// Socket addr of node.
     pub saddr: SocketAddr,
     /// Public Key of the node.
     pub pk: PublicKey,
}

But I suggest changing it to


pub struct PackedNode {
    /// Socket addr of node.
    pub saddr: SocketAddr,
    /// Public Key of the node.
    pub pk: PublicKey,
    /// Status of node.
    pub status: NodeStatus,
    /// Last responded time.
    pub last_resp_time: Instant,
}

Here, NodeStatus


pub enum NodeStatus {
    /// online
    Good,
    /// maybe offline
    Bad,
}

Because the current implementation has no concept of timeout,
we cannot replace a BAD node with a GOOD node.

A BAD node is one from which no response packet has been received for over 162 seconds.
So it may be an offline node.
To replace BAD nodes with GOOD nodes,
distance() in kbucket.rs can be changed.

Current implementation is


impl Distance for PublicKey {
    fn distance(&self,
                &PublicKey(ref pk1): &PublicKey,
                &PublicKey(ref pk2): &PublicKey) -> Ordering {

        trace!(target: "Distance", "Comparing distance between PKs.");
        let &PublicKey(own) = self;
        for i in 0..PUBLICKEYBYTES {
            if pk1[i] != pk2[i] {
                return Ord::cmp(&(own[i] ^ pk1[i]), &(own[i] ^ pk2[i]))
            }
        }
        Ordering::Equal
    }
}

Suggestion is


impl Distance for PublicKey {
    fn distance(&self,
                node1: &mut PackedNode,
                node2: &mut PackedNode,
                bad_node_timeout: Duration) -> Ordering {

        trace!(target: "Distance", "Comparing distance between PKs.");
        if node1.last_resp_time.elapsed() > bad_node_timeout {
            node1.status = NodeStatus::Bad;
        }
        if node2.last_resp_time.elapsed() > bad_node_timeout {
            node2.status = NodeStatus::Bad;
        }

        match node1.status {
            NodeStatus::Good => {
                match node2.status {
                    NodeStatus::Good => { // Good, Good
                        let &PublicKey(own) = self;
                        let PublicKey(pk1) = node1.pk;
                        let PublicKey(pk2) = node2.pk;
                        for i in 0..PUBLICKEYBYTES {
                            if pk1[i] != pk2[i] {
                                return Ord::cmp(&(own[i] ^ pk1[i]), &(own[i] ^ pk2[i]))
                            }
                        }
                        Ordering::Equal
                    },
                    NodeStatus::Bad => { // Good, Bad
                        Ordering::Less // Good is closer
                    },
                }
            },
            NodeStatus::Bad => {
                match node2.status {
                    NodeStatus::Good => { // Bad, Good
                        Ordering::Greater // Bad is farther
                    },
                    NodeStatus::Bad => { // Bad, Bad
                        let &PublicKey(own) = self;
                        let PublicKey(pk1) = node1.pk;
                        let PublicKey(pk2) = node2.pk;
                        for i in 0..PUBLICKEYBYTES {
                            if pk1[i] != pk2[i] {
                                return Ord::cmp(&(own[i] ^ pk1[i]), &(own[i] ^ pk2[i]))
                            }
                        }
                        Ordering::Equal
                    },
                }
            },
        }
    }
}

pass_key_encrypt_test failure

failures:

---- toxencryptsave_tests::encryptsave_tests::pass_key_encrypt_test stdout ----
        thread 'toxencryptsave_tests::encryptsave_tests::pass_key_encrypt_test' panicked at 'assertion failed: plain.as_slice() != &encrypted[EXTRA_LENGTH..]', src/toxencryptsave_tests/encryptsave_tests.rs:55:9
note: Run with `RUST_BACKTRACE=1` for a backtrace.
thread 'toxencryptsave_tests::encryptsave_tests::pass_key_encrypt_test' panicked at '[quickcheck] TEST FAILED (runtime error). Arguments: ([99], PassKey { salt: Salt([132, 13, 130, 209, 13, 255, 203, 90, 204, 216, 210, 247, 83, 102, 51, 37, 10, 34, 187, 45, 145, 233, 140, 1, 108, 64, 79, 250, 220, 209, 170, 155]), key: PrecomputedKey(****) })
Error: "assertion failed: plain.as_slice() != &encrypted[EXTRA_LENGTH..]"', ~/.cargo/registry/src/github.com-1ecc6299db9ec823/quickcheck-0.6.1/src/tester.rs:179:28
---- toxencryptsave_tests::encryptsave_tests::pass_key_encrypt_test stdout ----
	thread 'toxencryptsave_tests::encryptsave_tests::pass_key_encrypt_test' panicked at 'assertion failed: plain.as_slice() != &encrypted[EXTRA_LENGTH..]', src\toxencryptsave_tests\encryptsave_tests.rs:55:9
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:

...

thread 'toxencryptsave_tests::encryptsave_tests::pass_key_encrypt_test' panicked at '[quickcheck] TEST FAILED (runtime error). Arguments: ([3], PassKey { salt: Salt([175, 143, 134, 222, 34, 70, 120, 248, 245, 196, 223, 12, 231, 146, 181, 218, 88, 89, 254, 117, 158, 45, 94, 247, 57, 119, 198, 32, 68, 223, 86, 60]), key: PrecomputedKey(****) })
Error: "assertion failed: plain.as_slice() != &encrypted[EXTRA_LENGTH..]"', C:\Users\appveyor\.cargo\registry\src\github.com-1ecc6299db9ec823\quickcheck-0.6.2\src\tester.rs:179:28
note: Some details are omitted, run with `RUST_BACKTRACE=full` for a verbose backtrace.
stack backtrace:
   0: std::sys::windows::backtrace::unwind_backtrace

ssh

Is it possible to create a normal ssh connection using tox?
For example, communications are blocked at my firm,
but I can use tox.

client/client communications and ssh

Return with custom ErrorKind at nom 5.0

Nom 5 doesn't have a custom ErrorKind now.
But I want to return a custom error kind in this situation:

let request = if let Some(request) = request {
    request
} else {
    return Err(Err::Error((input, ErrorKind::NoneOf)));
};
let capabilities = if let Some(capabilities) = capabilities {
    capabilities
} else {
    return Err(Err::Error((input, ErrorKind::NoneOf)));
};

It is needed when the Msi packet doesn't have a Request subpacket or a Capability subpacket.
Right now we return ErrorKind::NoneOf, but it would be better to return an exact explanation of why the error occurred.

The error kinds I want to use are

const NOM_CUSTOM_ERR_REQUEST_SUBPACKET_OMITTED: u32 = 1;
const NOM_CUSTOM_ERR_CAPABILITIES_SUBPACKET_OMITTED: u32 = 2;

Any idea?
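
One possible direction (a sketch assuming nom 5's ParseError trait, not tested against our parsers) is a custom error type instead of numeric codes:

use nom::error::{ErrorKind, ParseError};

#[derive(Debug, PartialEq)]
enum MsiError<I> {
    RequestSubpacketOmitted,
    CapabilitiesSubpacketOmitted,
    Nom(I, ErrorKind),
}

impl<I> ParseError<I> for MsiError<I> {
    fn from_error_kind(input: I, kind: ErrorKind) -> Self {
        MsiError::Nom(input, kind)
    }

    fn append(_input: I, _kind: ErrorKind, other: Self) -> Self {
        other
    }
}

// A parser could then return, for example:
// Err(nom::Err::Error(MsiError::RequestSubpacketOmitted))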

future plans (full feature persistent chats, without bots)

Just a question.

in mainline toxcore, at least 2-3 PRs with persistent chats were rejected or abandoned

work on persistent chats has been going on for many years by different developers, and all they could do is TokTok/c-toxcore#1069 (review).
It's minimal persistence (across disconnect, not across client restart), and it still will not work without the group bot.

do you have plans to go further than c-toxcore and implement full-featured persistent chats, change the reference implementation docs, and leave c-toxcore in the past (marked as deprecated)?

DHTPK human documentation.

Description of DHT PK packets

The onion client works using onion, tcp_relay and dht to manage exchanging nodes' PKs and connections to friends.

The Tox client calls the onion client's main loop; this loop does:

  • check if onion is connected to tox networks
    • the last packet was received more than 75 seconds ago => onion is not connected.
    • there is no path entry in path list => onion is not connected.
  • if onion is not connected to Tox networks then
    • populate path list from tcp_relay module
  • if onion is connected to tox networks then
    • do things below for every friend
      • check if friend is online
      • if friend is online then
        • skip this friend
      • if friend is not online then
        • send AnnounceRequest to all of friend's client list
          • if some node of friend's client list is timed out then
            • send AnnounceRequest to random node of path list
      • there are two timer interval, dhtpk for dht and dhtpk for onion
      • if dhtpk interval timer for dht expires then
        • create ONION_DATA_DHTPK payload
        • get node list from the close list of the friend, which is in dht
        • create CRYPTO_PACKET_DHTPK packet
        • send packet to all node
      • if dhtpk interval timer for onion expires then
        • create ONION_DATA_DHTPK payload
        • get destination nodes from friend's client list
        • for each destination node do
          • create OnionDataRequest packet
          • if destination address is UDP then
            • send packet using UDP
          • if destination address is TCP then
            • send packet using TCP client's secure connection
  • if 3 seconds have passed since booting of onion then
    • get random path from onion's path list
      • create OnionAnnounce packet using onion's PK
      • send packet using TCP or UDP
    • get random path from onion friend's path list
      • create OnionAnnounce packet using friend's PK
      • send packet using TCP or UDP

When dhtpk packet is received then

  • decrypt
  • if PK differs from current PK then
    • delete DHT friend
    • create DHT friend with new PK
    • make a new friend connection
    • set onion friend's dht PK
  • unpack the received packet, then for each node
    • if node is UDP then
      • send NodesRequest
    • if node is TCP then
      • add node to tcp relay list and friend's tcp relay list

When AnnounceResponse packet is received then

  • decrypt packet
  • init path's timeout variables
  • copy nodes to path_nodes list
  • add nodes to list to be announced or friend's client list
  • init nodes added
  • for each node of payload do
    • if node's IP is within LAN then skip
    • if the destination node is timed out then skip
    • if node is already in list to be announced or friend's client list then skip
    • send AnnounceRequest to node

When OnionDataResponse packet is received from onion then

  • decrypt packet
  • process packet using handler for dhtpk announce

When packet is received from TCP client then

  • if packet is AnnounceResponse then
    • process packet using handler for AnnounceResponse
  • if packet is OnionDataResponse then
    • process packet using handler for OnionDataResponse

Onion human documentation

The Onion module allows nodes to announce their long term public keys and find friends by their long term public keys.
There are two basic onion requests: AnnounceRequest and OnionDataRequest. They are enclosed in OnionRequest packets and sent through the onion path to prevent nodes from finding out the long term public key when they know only the temporary DHT public key. There are three types of OnionRequest packets: OnionRequest0, OnionRequest1 and OnionRequest2. AnnounceRequest and OnionDataRequest, when created, are enclosed in OnionRequest2, OnionRequest2 is enclosed in OnionRequest1 and OnionRequest1 is enclosed in OnionRequest0. When a DHT node receives an OnionRequest packet it decrypts the inner packet and sends it to the next node.

+--------+                       +--------+                       +--------+                       +--------+   +------------------+   +------------+
|        |   +---------------+   |        |   +---------------+   |        |   +---------------+   |        |   | AnnounceRequest  |   |            |
| Sender |---| OnionRequest0 |-->| Node 1 |---| OnionRequest1 |-->| Node 2 |---| OnionRequest2 |-->| Node 3 |---+------------------+-->| Onion node |
|        |   +---------------+   |        |   +---------------+   |        |   +---------------+   |        |   | OnionDataRequest |   |            |
+--------+                       +--------+                       +--------+                       +--------+   +------------------+   +------------+

Similarly to requests, there are responses AnnounceResponse and OnionDataResponse, which are enclosed in three kinds of OnionResponse packets: OnionResponse3, OnionResponse2 and OnionResponse1. OnionResponse packets are processed in the same way but in reverse order.

+------------+                        +--------+                        +--------+                        +--------+   +-------------------+   +----------+
|            |   +----------------+   |        |   +----------------+   |        |   +----------------+   |        |   | AnnounceResponse  |   |          |
| Onion node |---| OnionResponse3 |-->| Node 3 |---| OnionResponse2 |-->| Node 2 |---| OnionResponse1 |-->| Node 1 |---+-------------------+-->| Receiver |
|            |   +----------------+   |        |   +----------------+   |        |   +----------------+   |        |   | OnionDataResponse |   |          |
+------------+                        +--------+                        +--------+                        +--------+   +-------------------+   +----------+

When an onion node handles an AnnounceRequest packet it sends the answer to the original sender via the same onion path with the help of the received onion return addresses. But when it handles an OnionDataRequest packet it should send the response packet to another destination node identified by its long term public key. That means the onion node should store the long term public keys of announced nodes along with their onion return addresses.
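A rough sketch of the per-node state an onion node keeps for announced nodes; the field names are hypothetical, not the actual tox-rs types:

use std::net::SocketAddr;

// Illustrative layout of an onion announce entry, as described above.
struct OnionAnnounceEntry {
    real_pk: [u8; 32],     // announced long term public key
    data_pk: [u8; 32],     // key used to encrypt OnionDataResponse payloads
    ret_addr: SocketAddr,  // address the announce came from
    onion_return: Vec<u8>, // encrypted return path back to the announcer
}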

OnionRequest0

  • Decrypt payload using own onion sk, temporary_pk and nonce
  • Take IpPort of next node from decrypted payload
  • Create OnionRequest1 packet using next encrypted payload
  • Encrypt IpPort of sender using OnionReturn format and append it to OnionRequest1
  • Send OnionRequest1 to the next node

OnionRequest1

  • Decrypt payload using own onion sk, temporary_pk and nonce
  • Take IpPort of next node from decrypted payload
  • Create OnionRequest2 packet using next encrypted payload
  • Encrypt IpPort of sender and previous OnionReturn using OnionReturn format and append it to OnionRequest2
  • Send OnionRequest2 to the next node

OnionRequest2

  • Decrypt payload using own onion sk, temporary_pk and nonce
  • Take IpPort of next node from decrypted payload
  • Create AnnounceRequest or OnionDataRequest packet depending on what the decrypted payload contains
  • Encrypt IpPort of sender and previous OnionReturn using OnionReturn format and append it to the packet being sent
  • Send packet to the next node

AnnounceRequest

It's used for announcing ourselves to an onion node and for looking up other announced nodes.
If we want to announce ourselves we should send one AnnounceRequest packet with the PingId set to 0 to acquire the correct PingId of the onion node. Then, using this PingId, we can send another AnnounceRequest to be added to the onion nodes list. If the AnnounceRequest succeeds we will get an AnnounceResponse with is_stored set to 2; otherwise is_stored will be set to 0.
If we are looking for another node we should send an AnnounceRequest packet with the PingId set to 0 and with the PublicKey of that node. If the node is found we will get an AnnounceResponse with is_stored set to 1; otherwise is_stored will be set to 0.

  • Decrypt payload using own sk, real_or_temporary_pk and nonce
  • If PingId from decrypted payload is valid:
    • Try to add node to onion nodes list.
    • If it succeeds, set the is_stored variable to 2, otherwise to 0.
    • Set PingId to the next variable that stores PingId or PublicKey
  • If PingId from decrypted payload is invalid:
    • Try to find node in onion nodes list using PublicKey for search
    • If found:
      • Set is_stored variable to 1
      • Set PublicKey for data encryption of found node to the next variable that stores PingId or PublicKey
    • If not found:
      • Set is_stored variable to 0
      • Set PingId to the next variable that stores PingId or PublicKey
  • Add up to 4 DHT nodes from the KBucket that are closest to the PublicKey we are searching for
  • Create AnnounceResponse packet using real_or_temporary_pk

PingId is a sha256 hash of random secret bytes, the current time divided by a 20 second timeout, the PublicKey and the IpPort of the sender.
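A minimal sketch of this construction, assuming the sha2 crate; the function name and the IpPort serialization are simplified illustrations:

use sha2::{Digest, Sha256};
use std::net::SocketAddr;
use std::time::{SystemTime, UNIX_EPOCH};

const PING_ID_TIMEOUT: u64 = 20; // seconds

// `secret_bytes` is the onion node's random per-instance secret.
fn ping_id(secret_bytes: &[u8; 32], pk: &[u8; 32], saddr: &SocketAddr) -> [u8; 32] {
    let time = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();
    let mut hasher = Sha256::new();
    hasher.update(secret_bytes);
    hasher.update((time / PING_ID_TIMEOUT).to_be_bytes());
    hasher.update(pk);
    hasher.update(saddr.to_string().as_bytes()); // IpPort, serialized naively here
    hasher.finalize().into()
}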

AnnounceResponse

It's used to respond to an AnnounceRequest packet. The is_stored variable contains the result of the sent request. It can have the following values:

  • 0: failed to announce ourselves or find the requested node
  • 1: requested node is found by its real PublicKey
  • 2: we successfully announced ourselves

OnionDataRequest

It's used to send data requests to a DHT node using onion paths. When a DHT node receives an OnionDataRequest it sends an OnionDataResponse to the destination node for which the data request is intended. Thus, the data request will go through 7 intermediate nodes until the destination node gets it.

  • Find node by destination_pk in onion nodes list.
  • Drop the packet if it's not found.
  • Create OnionDataResponse packet using received OnionDataRequest packet and OnionReturn of found node.
  • Send it to node through which destination node announced itself.

OnionDataResponse

When an onion node receives an OnionDataRequest packet it converts it to an OnionDataResponse and sends it to the destination node, provided that node announced itself and is contained in the onion nodes list.

OnionResponse3

  • Decrypt onion return 3 using own onion symmetric key getting onion return 2 and address of next node
  • Create OnionResponse2 packet using decrypted onion return 2 and payload
  • Send OnionResponse2 packet to decrypted address

OnionResponse2

  • Decrypt onion return 2 using own onion symmetric key getting onion return 1 and address of next node
  • Create OnionResponse1 packet using decrypted onion return 1 and payload
  • Send OnionResponse1 packet to decrypted address

OnionResponse1

  • Decrypt onion return 1 using own onion symmetric key getting address of destination node
  • Create AnnounceResponse or OnionDataResponse packet using payload
  • Send AnnounceResponse or OnionDataResponse packet to decrypted address
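All three response handlers peel one layer of the onion return. A minimal sketch of that common step, assuming sodiumoxide-style secretbox as the symmetric cipher; the types here are illustrative:

use sodiumoxide::crypto::secretbox::{self, Key, Nonce};

struct OnionReturn {
    nonce: Nonce,
    payload: Vec<u8>, // encrypted: IpPort of next hop [+ inner OnionReturn]
}

fn peel_onion_return(ret: &OnionReturn, onion_symmetric_key: &Key) -> Option<Vec<u8>> {
    // On success the plaintext starts with the next hop's IpPort; for
    // OnionResponse3/2 it is followed by the inner OnionReturn.
    secretbox::open(&ret.payload, &ret.nonce, onion_symmetric_key).ok()
}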

Notes

  • Onion symmetric key used for onion return encryption should be updated every 2 hours to force onion path expiration.

NodesResponse.ping_id does not match

When I run the dht_node example, I get a lot of errors:

ERROR 2018-04-05T14:11:58Z: dht_node: failed to handle packet: Custom { kind: Other, error: StringError("NodesResponse.ping_id does not match") }

You should investigate it.

ssh or vpn

Is it possible to use ssh through tox?

Or to create a new interface, like iodine does?

Doc comments are interpreted as markdown code blocks.

Cargo doc currently emits tons of warnings when run on this codebase.

cargo clean --doc && cargo doc
...
warning: could not parse code block as Rust code
   --> src/toxencryptsave/mod.rs:189:5
    |
189 | /     Decrypts provided `data` with `self` `PassKey`.
190 | |
191 | |     Decrypted data is smaller by [`EXTRA_LENGTH`](./constant.EXTRA_LENGTH.html)
192 | |     than encrypted data.
...   |
212 | |     assert_eq!(passkey.decrypt(&[]), Err(DecryptionError::Null));
213 | |     ```
    | |_______^
    |
    = note: error from rustc: unknown start of token: `

A bit of background. Markdown supports two modes for code blocks. The more common mode uses tick marks like so:

```
// code block..
```

but there is another mode that uses indentation:

    // code block..

The issue likely has to do with rustdoc not stripping whitespace from multiline doc comments in its unindent-comments pass.
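As a hypothetical repro of that theory: continuation lines indented by four or more spaces are parsed as an indented markdown code block, which rustdoc then tries to run as a Rust doctest:

/** Decrypts provided `data`.

    Because these continuation lines keep four spaces of
    indentation, markdown treats them as an indented code
    block and rustdoc tries (and fails) to parse them as Rust.
*/
pub fn decrypt() {}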

Need to fork the reference documentation again

Downstream has heavily modified the reference specification since zetok ghosted. I'd like to incorporate some of these changes but I don't think he'll respond to the pull requests. Can we get a clone of the reference specification as a repository under tox-rs?

Move to async/await

  • migrate to futures 0.3 and tokio 0.2
  • replace all fn -> impl Future with async fn
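A minimal before/after sketch of the second point, assuming the futures 0.3 crate; function names are hypothetical:

use std::future::Future;

// Before: combinator style, returning `impl Future` (futures 0.3).
fn ping_old() -> impl Future<Output = Result<(), std::io::Error>> {
    futures::future::ok(())
}

// After: the same behaviour written as an `async fn`.
async fn ping_new() -> Result<(), std::io::Error> {
    Ok(())
}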

Crypto handshake human doc

A crypto connection can have one of the following statuses:

  • CookieRequesting
  • HandshakeSending
  • NotConfirmed
  • Established

Usually the initial status is CookieRequesting. But it can also be NotConfirmed if we received a CryptoHandshake packet before the connection was initialized.
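A minimal sketch of this state machine; the variant payloads are hypothetical:

// Illustrative connection state machine for the crypto handshake.
enum ConnectionStatus {
    CookieRequesting { cookie_request_id: u64, packets_sent: u8 },
    HandshakeSending { packets_sent: u8 },
    NotConfirmed,
    Established,
}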

CookieRequesting

In this status we send CookieRequest packets once per second, up to 8 in total, and wait for a CookieResponse packet.

A Cookie is a pair of keys - the DHT public key and the long term public key - encrypted with a private symmetric key. It also contains a timestamp, so that it is valid only for 8 seconds after creation.
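A rough sketch of the Cookie contents and the timeout check; the layout here is illustrative, not the exact wire format:

use std::time::{SystemTime, UNIX_EPOCH};

struct Cookie {
    time: u64,          // creation timestamp in seconds
    real_pk: [u8; 32],  // long term public key
    dht_pk: [u8; 32],   // DHT public key
}

const COOKIE_TIMEOUT: u64 = 8; // seconds

fn is_timed_out(cookie: &Cookie) -> bool {
    let now = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs();
    now > cookie.time + COOKIE_TIMEOUT
}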

When we receive a CookieResponse packet we check that the ID of this packet is equal to the ID from the request and if so we change the status to HandshakeSending.

If we received a CryptoHandshake packet earlier than a CookieResponse packet, we take the Cookie from the received CryptoHandshake packet and change the status to HandshakeSending as well.

If we get neither a CookieResponse nor a CryptoHandshake packet, we consider the connection timed out.

HandshakeSending

In this status we send CryptoHandshake packets once per second, up to 8 in total, and wait for a CryptoHandshake packet from the other side.

When we receive a CryptoHandshake packet we check the Cookie from this packet. First of all it should have the same hash as the one enclosed in the encrypted payload. With this check we ensure that the Cookie wasn't changed or forged, which would otherwise be possible because it lies outside of the encrypted payload. Then, if we can decrypt this Cookie with our private symmetric key, we check its content. The Cookie shouldn't be timed out, and both the DHT and long term public keys should match the connection keys. If only the long term key is correct but the DHT key is different, we assume that the DHT key was changed and send the appropriate message.

If after all these checks the Cookie is considered correct, we change the status to NotConfirmed.

NotConfirmed

In this status we continue sending CryptoHandshake packets because we don't know if the other side received our CryptoHandshake packet.

When we receive the first valid CryptoData packet we change the status to Established.

Established

In this status the connection is considered fully established.

DhtPacket Human documentation

Description of DHT packets

1. Overview

PingRequest and NodesRequest provide the methods to interact with the Tox network.

  • Purpose of packet

    • PingRequest : check if the node is alive
    • NodesRequest : get nodes closest to me and my friends.
    • NatPingRequest : check if my friend is behind a NAT, then punch a hole
  • Data types used

    • Kbucket : contains nodes closest to me
    • Bucket : contains nodes closest to my friend, nodes to bootstrap

2. PingRequest

  • When a DHT node receives a PingRequest, it responds with a PingResponse and sends its own PingRequest.

  • When a DHT node receives a NodesRequest, it responds with a NodesResponse and sends a PingRequest.

  • Before a DHT node sends a PingRequest, it checks if the node is addable to the close list

    • addable : send PingRequest
    • not addable : don't send PingRequest

3. PingResponse

  • When a DHT node receives a PingResponse, it checks if the node is addable to the close list
    • addable : add the node to the close list and the friends' close lists
    • not addable : don't add the node to the close list

4. NodesRequest

There are two timer intervals (see the sketch after this list):

  • 60 second interval : to ping nodes in the close list and friends' close lists
  • 20 second interval : to get nodes close to me and my friends
    • the 20 second interval timer expires only 5 times after start-up
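A minimal sketch of these two timers, assuming a recent tokio with the time and macros features; all names and handler bodies are hypothetical:

use std::time::Duration;
use tokio::time::interval;

async fn run_dht_timers() {
    let mut ping_timer = interval(Duration::from_secs(60));
    let mut nodes_timer = interval(Duration::from_secs(20));
    let mut nodes_ticks = 0u32;

    loop {
        tokio::select! {
            _ = ping_timer.tick() => {
                // ping nodes in the close list and friends' close lists
            }
            _ = nodes_timer.tick(), if nodes_ticks < 5 => {
                // request nodes close to us and to our friends
                nodes_ticks += 1;
            }
        }
    }
}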

There are two cases in which a NodesRequest is sent:

  • when one of the two timers expires
  • when bootstrapping

Bootstrapping is checking whether nodes are alive or not.

The nodes to bootstrap are the nodes from a NodesResponse for which

  • the node is addable to the close list of my Public Key, or
  • the node is addable to the close list of my friend's Public Key

When a DHT node receives a NodesRequest, it responds with a NodesResponse.

A NodesResponse contains at most 4 nodes from the close list and the friends' close lists.

These are the (at most) 4 nodes closest to the requested Public Key.

5. NodesResponse

When a DHT node receives a NodesResponse, it unpacks the nodes in the packet's payload.

Each unpacked node is then added:

  • if addable to the close list, add it
  • if addable to a friend's close list, add it
  • if addable to the bootstrap list, add it
  • if addable to a friend's bootstrap list, add it

6. Replacement order of Kbucket and Bucket

Nodes in the close list have two statuses

  • Good : the node is alive, meaning its last response was within 182 seconds.
  • Bad : the node is not alive, meaning its last response was more than 182 seconds ago.

An old node in the close list is replaced by a new node in the following order (see the sketch after this list):

  • Bad and farthest
  • Good and farther than the new node
  • if there is no bad node and no good node farther than the new node, then the new node is not addable
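A minimal sketch of this replacement rule, using the usual DHT XOR metric to the base public key; the Node type and function names are hypothetical:

struct Node {
    pk: [u8; 32],
    is_bad: bool, // last response more than 182 seconds ago
}

fn xor_distance(base: &[u8; 32], pk: &[u8; 32]) -> [u8; 32] {
    let mut d = [0u8; 32];
    for i in 0..32 {
        d[i] = base[i] ^ pk[i];
    }
    d
}

// Returns true if `new` was inserted, possibly evicting an old node.
fn try_replace(list: &mut Vec<Node>, base: &[u8; 32], new: Node) -> bool {
    // 1. Evict the farthest bad node, if there is one.
    if let Some(i) = (0..list.len())
        .filter(|&i| list[i].is_bad)
        .max_by_key(|&i| xor_distance(base, &list[i].pk))
    {
        list[i] = new;
        return true;
    }
    // 2. Otherwise evict the farthest good node, but only if it is
    //    farther from `base` than the new node.
    if let Some(i) = (0..list.len()).max_by_key(|&i| xor_distance(base, &list[i].pk)) {
        if xor_distance(base, &list[i].pk) > xor_distance(base, &new.pk) {
            list[i] = new;
            return true;
        }
    }
    false // 3. Neither rule applies: the new node is not addable.
}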

7. NatPingRequest

If a friend is not directly connected to me but is online, then every 3 seconds

  • send a NatPingRequest to every entry in the friend's close_nodes

A friend's close nodes have SocketAddrs; hole punching starts (see the sketch after this list) when all of the following hold:

  • more than half of the IP addresses are the same,
  • the last NatPingResponse was received within 6 seconds, and
  • the do_hole_punching variable == 1
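A minimal sketch of that precondition; all names are hypothetical:

fn should_start_hole_punching(
    same_ip_count: usize,     // close nodes reporting the most common IP
    total_nodes: usize,       // close nodes with a known SocketAddr
    secs_since_nat_pong: u64, // time since the last NatPingResponse
    do_hole_punching: u8,
) -> bool {
    same_ip_count * 2 > total_nodes
        && secs_since_nat_pong < 6
        && do_hole_punching == 1
}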

When a DHT node receives a NatPingRequest, it checks whether the Receiver PK is the same as my PK

  • if it is the same PK, find the Sender's PK in my friends list
    • if found, respond with a NatPingResponse
    • if not found, do nothing
  • if it is not the same, search the DHT node's close list for the same PK
    • if found, forward the NatPingRequest to that node
    • if not found, do nothing

8. NatPingResponse

When a DHT node receives a NatPingResponse, it checks whether the Receiver PK is the same as my PK

  • if it is the same PK, find the Sender's PK in my friends list
    • if found, check whether the ping_id is the same as the saved ping_id
      • if the same, set the do_hole_punching variable to 1
      • if not the same, do nothing
    • if not found, do nothing
  • if it is not the same, search the DHT node's close list for the same PK
    • if found, forward the NatPingResponse to that node
    • if not found, do nothing

[Question] Can I send friend request with a message?

I looked through the documentation: add_friend accepts only one parameter, which is the friend's public key, but the C library can add a friend with a message, so I wonder if there is a way to achieve this.

I noticed that there is a FriendRequests struct, but it seems to never be used in the code.

Thanks for the help!

IPv6 in NodesResponse

  • Nodes in the Tox network can be IPv4 only, IPv6 only, or dual stack (IPv4 and IPv6):
-----------------
| IPv6 addr.     | Kbucket
|                |
-----------------
IPv6 only node (IPv6 socket)
        |
        | NodesResponse
        v
-----------------
| IPv6 addr.     | Kbucket
| IPv4 addr.     |
-----------------
Dual stack node (IPv6 socket)
        |
        | NodesResponse
        v
-----------------
| IPv6 addr.     | Kbucket
| IPv4 addr.     |
-----------------
IPv4 only node (IPv4 socket)
  • To prevent errors like this one:

{ kind: Other, error: StringError("DHT node is running in ipv4 mode but target node's socket is ipv6 address") }

We send a *Request packet to both the IPv4 and the IPv6 address, if both exist and the DHT server is running in IPv6 mode.

Major OSes like Linux, macOS, Windows, Android and iOS run in dual stack mode when they use IPv6.

If the DHT server is running in IPv4 mode, it sends packets only to IPv4 addresses.
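A minimal sketch of this rule as a predicate; the helper function is hypothetical:

use std::net::SocketAddr;

// `server_is_ipv6` reflects the DHT server's mode.
fn may_send_to(server_is_ipv6: bool, target: &SocketAddr) -> bool {
    match target {
        SocketAddr::V4(_) => true,           // IPv4 targets are always allowed
        SocketAddr::V6(_) => server_is_ipv6, // IPv6 targets only in IPv6 mode
    }
}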

Failure is deprecated

Notice: failure is deprecated. If you liked failure's API, consider using:

  • Anyhow is a good replacement for failure::Error.
  • thiserror is a good, near drop-in replacement for #[derive(Fail)].
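For example, a hypothetical error type ported from failure's #[derive(Fail)] to thiserror might look like:

use thiserror::Error;

// Variants are illustrative only, not the crate's actual error types.
#[derive(Error, Debug)]
pub enum DecryptError {
    #[error("decryption failed")]
    Failed,
    #[error(transparent)]
    Io(#[from] std::io::Error),
}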

License confusion

You state that everyone is free to use this code under the terms of either MIT or GPLv3, but that means GPLv3 doesn't have any power here at all, so what's the point of GPLv3 being here?

Decrypting DhtRequest failed

     Running `target/debug/examples/dht_node`
DEBUG 2018-04-03T02:26:16Z: tox::toxcore::dht::server: Created new Server instance
DEBUG 2018-04-03T02:26:16Z: tox::toxcore::dht::packed_node: Creating new PackedNode.
DEBUG 2018-04-03T02:26:16Z: tox::toxcore::dht::kbucket: Trying to add PackedNode.
DEBUG 2018-04-03T02:26:16Z: tox::toxcore::dht::kbucket: Calculating KBucketIndex for PKs.
DEBUG 2018-04-03T02:26:16Z: tox::toxcore::dht::kbucket: Trying to add PackedNode.
DEBUG 2018-04-03T02:26:16Z: tox::toxcore::dht::kbucket: Node inserted at the end of the bucket.
 INFO 2018-04-03T02:26:16Z: dht_node: server running on localhost:12345
DEBUG 2018-04-03T02:26:16Z: tokio_reactor::background: starting background reactor
DEBUG 2018-04-03T02:26:16Z: tokio_reactor: loop process - 1 events, 0.000s
nat_wakeup
DEBUG 2018-04-03T02:26:19Z: tox::toxcore::dht::packed_node: Creating new PackedNode.
DEBUG 2018-04-03T02:26:19Z: dht_node: Send DhtRequest(DhtRequest { rpk: PublicKey([15, 107, 126, 130, 81, 55, 154, 157, 192, 117, 0, 225, 119, 43, 48, 117, 84, 109, 112, 57, 243, 216, 4, 171, 185, 111, 33, 146, 221, 31, 77, 118]), spk: PublicKey([237, 29, 182, 144, 124, 90, 170, 244, 229, 25, 178, 245, 108, 138, 170, 145, 39, 120, 138, 42, 7, 63, 218, 33, 37, 185, 40, 70, 57, 100, 246, 91]), nonce: Nonce([31, 208, 22, 149, 94, 3, 163, 17, 128, 144, 127, 178, 225, 80, 60, 46, 9, 21, 119, 179, 125, 140, 99, 151]), payload: [136, 155, 209, 180, 51, 142, 8, 167, 198, 179, 231, 100, 133, 72, 51, 234, 53, 191, 166, 102, 226, 8, 185, 166, 208, 124] }) => V4(127.0.0.1:33445)
send = DhtRequest(DhtRequest { rpk: PublicKey([15, 107, 126, 130, 81, 55, 154, 157, 192, 117, 0, 225, 119, 43, 48, 117, 84, 109, 112, 57, 243, 216, 4, 171, 185, 111, 33, 146, 221, 31, 77, 118]), spk: PublicKey([237, 29, 182, 144, 124, 90, 170, 244, 229, 25, 178, 245, 108, 138, 170, 145, 39, 120, 138, 42, 7, 63, 218, 33, 37, 185, 40, 70, 57, 100, 246, 91]), nonce: Nonce([31, 208, 22, 149, 94, 3, 163, 17, 128, 144, 127, 178, 225, 80, 60, 46, 9, 21, 119, 179, 125, 140, 99, 151]), payload: [136, 155, 209, 180, 51, 142, 8, 167, 198, 179, 231, 100, 133, 72, 51, 234, 53, 191, 166, 102, 226, 8, 185, 166, 208, 124] }) V4(127.0.0.1:33445)
DEBUG 2018-04-03T02:26:19Z: tokio_reactor: loop process - 1 events, 0.000s
DEBUG 2018-04-03T02:26:19Z: tox::toxcore::dht::packet: Getting packet data from DhtRequest.
DEBUG 2018-04-03T02:26:19Z: tox::toxcore::dht::packet: Decrypting DhtRequest failed!

BTW, why do we send it to ourselves (V4(127.0.0.1:33445))?
