
microsoft / ntttcp-for-linux


A multi-threaded Linux network throughput benchmark tool.

Home Page: https://github.com/Microsoft/ntttcp-for-linux

License: MIT License

Languages: Makefile 0.79%, C 87.92%, CMake 0.26%, Python 11.03%

ntttcp-for-linux's Introduction

NTTTCP-for-Linux

Summary

A multi-threaded Linux network throughput benchmark tool.

Features

  • Multiple threads to send/receive data ('-P', '-n', and '-l'). By default, the Receiver ('-r') uses 16 threads and the Sender ('-s') uses 64 threads to exchange data.

  • Supports CPU affinity ('-m').

  • Supports running in the background as a daemon ('-D').

  • Sender and Receiver run in sync mode by default; use '-N' (no_sync) to disable the sync.

  • Supports testing with multiple clients (use '-M' on the Receiver and '-L' on the last Sender).

  • Uses select() by default; supports epoll() (use '-e' on the Receiver).

  • Supports both TCP (the default) and UDP ('-u') tests.

  • Supports pinning the TCP server or client port (use '-p' on the Receiver or '-f' on the Sender).

  • Supports test warm-up ('-W') and cool-down ('-C').

  • Supports reporting TCP retransmits ('--show-tcp-retrans').

  • Supports reporting the number of packets ('--show-nic-packets') and the number of interrupts ('--show-dev-interrupts').

  • Supports bandwidth limiting ('-B' or '--fq-rate-limit').

  • Supports capturing the console log to a file ('-O').

  • Supports writing results to an XML file ('-x').

  • Supports writing results to a JSON file ('-j').
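A hedged example combining several of the options above (flag spellings are taken from this page; run 'ntttcp -h' for the exact syntax in your version):

On the receiver:

./ntttcp -r -e -W 1 -C 1 -t 60

On the sender:

./ntttcp -s 192.168.4.1 -W 1 -C 1 -t 60 -j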

Getting Started

Building NTTTCP-for-Linux

  • Using make:
	make; make install
  • Using CMake:
	cd src
	mkdir build && cd build
	cmake ..
	make && make install

Usage

ntttcp -h

Known issues

See the issues section at the end of this page.

Example run

This example measures network performance between two multi-core servers running SLES 12, NODE1 (192.168.4.1) and NODE2 (192.168.4.2), connected via a 40 GigE link.

On NODE1 (the receiver), run:

./ntttcp -r

(Translation: run ntttcp as a receiver with the default settings: 16 threads created and run across all CPUs, a 64 KB receiver buffer, and a 60-second run.)

And on NODE2 (the sender), run:

./ntttcp -s192.168.4.1

(Translation: run ntttcp as a sender with the default settings: 64 threads created and run across all CPUs, a 128 KB sender buffer, and a 60-second run.)

Using the above parameters, the program returns results on both the sender and receiver nodes, correlating network communication to CPU utilization.

Example sender-side output from a run with a one-second warm-up, a 10-second test, and a one-second cool-down (showing 23.65 Gbps throughput):

NODE1:/home/simonxiaoss/ntttcp-for-linux/src # ./ntttcp -s 10.0.0.1 -W 1 -t 10 -C 1
NTTTCP for Linux 1.3.4
---------------------------------------------------------
22:36:30 INFO: Network activity progressing...
22:36:30 INFO: 64 threads created
22:36:31 INFO: Test warmup completed.
22:36:42 INFO: Test run completed.
22:36:42 INFO: 64 connections tested
22:36:42 INFO: #####  Totals:  #####
22:36:42 INFO: test duration    :10.36 seconds
22:36:42 INFO: total bytes      :30629953536
22:36:42 INFO:   throughput     :23.65Gbps
22:36:42 INFO: cpu cores        :20
22:36:42 INFO:   cpu speed      :2394.455MHz
22:36:42 INFO:   user           :3.60%
22:36:42 INFO:   system         :3.44%
22:36:42 INFO:   idle           :91.57%
22:36:42 INFO:   iowait         :0.00%
22:36:42 INFO:   softirq        :1.38%
22:36:42 INFO:   cycles/byte    :1.37
22:36:42 INFO: cpu busy (all)   :159.81%
---------------------------------------------------------
22:40:52 INFO: Test cooldown is in progress...
22:40:52 INFO: Test cycle finished.

Related topics

  1. Windows ntttcp.exe

  2. Use ntttcp to test network throughput

  3. Linux Integration Services Automation, LISA

Terms of Use

By downloading and running this project, you agree to the license terms of the third party application software, Microsoft products, and components to be installed.

The third party software and products are provided to you by third parties. You are responsible for reading and accepting the relevant license terms for all software that will be installed. Microsoft grants you no rights to third party software.

License

The MIT License (MIT)

Copyright (c) 2015 Microsoft Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

ntttcp-for-linux's People

Contributors

alan-jowett, csujedihy, kuangtu, leemichmsft, ligurio, lizzha, lpereira, lubaihua33, microsoft-github-policy-service[bot], mihaico, msftgits, mtfriesen, nunodasneves, pedroperezmsft, santoshx, sharsonia, shemminger, simonxiaoss, sunnyguoqq, thanasisk, weltling


ntttcp-for-linux's Issues

Avoid using atomic primitives in the hot path

For instance, in tcpstream.c and udpstream.c, __sync_fetch_and_add() is used to update the number of bytes transferred. This can cause contention between cores if the counters share cache lines (i.e., false sharing). Ideally, each thread would update its own counter without any kind of locking, and these numbers would be tallied up after joining the worker threads, as sketched below.
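A minimal sketch of the suggested approach (the names here are illustrative, not ntttcp's actual identifiers):

/* Per-thread byte counters, each padded out to its own cache line so
 * that worker threads never write to the same line (no false sharing).
 * Illustrative only -- not ntttcp's actual data structures. */
#include <stdint.h>

#define CACHE_LINE  64
#define MAX_THREADS 64

struct thread_counter {
	uint64_t bytes;                          /* written by exactly one thread */
	char pad[CACHE_LINE - sizeof(uint64_t)]; /* keep neighbors on other lines */
};

static struct thread_counter counters[MAX_THREADS]
	__attribute__((aligned(CACHE_LINE)));

/* Hot path: thread i does a plain, lock-free add instead of
 * __sync_fetch_and_add() on a shared counter:
 *     counters[i].bytes += n;
 */

/* After all worker threads have been joined, tally the totals once: */
static uint64_t total_bytes(int nthreads)
{
	uint64_t total = 0;
	for (int i = 0; i < nthreads; i++)
		total += counters[i].bytes;
	return total;
}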

Server segmentation fault

Running this command:
ntttcp -s -m 2,0,192.168.0.2 -p 50000 -t 60 -R -x

The test completes, but ntttcp does not exit cleanly; the last message printed is "Segmentation fault (core dumped)".

I did some investigation and it looks like it is the result of a change to the total_threads computation. I made a small change and it seems to have fixed the issue.

I'm including a patch for the change I made since I'm new to git and this seems faster. If I can figure out how to fork, patch, commit, push, and make a pull request (it may take me a bit), I'll do that too.

Test succeeds, but many "INFO: failed to connect to receiver" messages are printed

02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5114 on socket[187]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5115 on socket[189]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5115 on socket[189]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5116 on socket[189]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5116 on socket[189]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5116 on socket[191]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5117 on socket[192]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5117 on socket[192]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5118 on socket[192]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5118 on socket[192]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5118 on socket[192]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5119 on socket[195]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5119 on socket[196]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5120 on socket[197]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5120 on socket[197]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5121 on socket[199]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5121 on socket[199]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5122 on socket[201]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5122 on socket[202]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5123 on socket[203]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5123 on socket[203]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5124 on socket[205]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5124 on socket[205]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5125 on socket[207]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5125 on socket[207]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5126 on socket[209]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5126 on socket[209]. return = -1, errno = 111
02:13:18 INFO: failed to connect to receiver: 192.168.0.200:5127 on socket[211]. return = -1, errno = 111

*** Error in `ntttcp': realloc(): invalid pointer: 0x00007ffd842ffb80 ***

The test script runs between a Mellanox ConnectX-5 and a ConnectX-6 Dx, on a Supermicro dual-socket Xeon Gold SP and an Intel R1800 dual-socket 2nd Gen Xeon Gold SP.

I recently upgraded to HEAD as of 1/21/2021. I don't recall seeing this issue before (the last clone was likely >= 2 weeks ago), but this script has changed over time.

The ConnectX-5 is running as Gen3 x8, the ConnectX-6 Dx as Gen3 x16.

Basically this is just a stress test of link bounce and link saturation, testing out some new physical plant between the two cards.

Compiled with:

CFLAGS="-march=native -flto -Wl,-flto -O2" AR="gcc-ar" NM="gcc-nm" RANLIB="gcc-ranlib"

using devtoolset-9 on RHEL 7.7.

WARMUP=2
COOLDOWN=2
DURATION=60
ARGS="-t $DURATION -W $WARMUP -C $COOLDOWN"
while true; do
# Bounce the first port and verify it comes back up
ifconfig enp24s0f0 down
sleep 1
ifconfig enp24s0f0 up
sleep 1
ifconfig enp24s0f0 | grep UP,BROADCAST,RUNNING,MULTICAST || exit 1
# Receiver runs locally; sender runs on the remote host (host elided)
TIMESTAMP=$(date +%Y-%m-%dT%H:%M:%S%z)
ntttcp $ARGS -O log.$TIMESTAMP &
ssh @ ntttcp -s 192.168.100.1 $ARGS -O ntttcp.enp24s0f0.$TIMESTAMP.log -x ntttcp.enp24s0f0.$TIMESTAMP.xml
wait
# Repeat on the second subnet
TIMESTAMP=$(date +%Y-%m-%dT%H:%M:%S%z)
ntttcp $ARGS -O log.$TIMESTAMP &
ssh @ ntttcp -s 192.168.101.1 $ARGS -O ntttcp.enp24s0f0.$TIMESTAMP.log -x ntttcp.enp24s0f0.$TIMESTAMP.xml
wait
# Bounce the second port and verify it comes back up
ifconfig enp24s0f1 down
sleep 1
ifconfig enp24s0f1 up
sleep 1
ifconfig enp24s0f1 | grep UP,BROADCAST,RUNNING,MULTICAST || exit 1
done


======= Backtrace: =========
/lib64/libc.so.6(+0x7f3e4)[0x7fa76f86e3e4]
/lib64/libc.so.6(realloc+0x389)[0x7fa76f874f39]
/lib64/libc.so.6(getdelim+0x10b)[0x7fa76f85e96b]
ntttcp[0x406d9e]
ntttcp[0x409e40]
ntttcp[0x4039f7]
/lib64/libc.so.6(__libc_start_main+0xf5)[0x7fa76f811555]
ntttcp[0x4043bd]
======= Memory map: ========
00400000-00402000 r--p 00000000 fd:00 68462831 /usr/local/bin/ntttcp
00402000-0040b000 r-xp 00002000 fd:00 68462831 /usr/local/bin/ntttcp
0040b000-00410000 r--p 0000b000 fd:00 68462831 /usr/local/bin/ntttcp
00410000-00411000 r--p 0000f000 fd:00 68462831 /usr/local/bin/ntttcp
00411000-00412000 rw-p 00010000 fd:00 68462831 /usr/local/bin/ntttcp
0105d000-0107e000 rw-p 00000000 00:00 0 [heap]
7fa668000000-7fa668021000 rw-p 00000000 00:00 0
7fa668021000-7fa66c000000 ---p 00000000 00:00 0
7fa670000000-7fa670021000 rw-p 00000000 00:00 0
7fa670021000-7fa674000000 ---p 00000000 00:00 0
7fa674000000-7fa674021000 rw-p 00000000 00:00 0
7fa674021000-7fa678000000 ---p 00000000 00:00 0
7fa678000000-7fa678021000 rw-p 00000000 00:00 0
7fa678021000-7fa67c000000 ---p 00000000 00:00 0
7fa67c000000-7fa67c021000 rw-p 00000000 00:00 0
7fa67c021000-7fa680000000 ---p 00000000 00:00 0
7fa680000000-7fa680021000 rw-p 00000000 00:00 0
7fa680021000-7fa684000000 ---p 00000000 00:00 0
7fa684000000-7fa684021000 rw-p 00000000 00:00 0
7fa684021000-7fa688000000 ---p 00000000 00:00 0
7fa688000000-7fa688021000 rw-p 00000000 00:00 0
7fa688021000-7fa68c000000 ---p 00000000 00:00 0
7fa68c000000-7fa68c021000 rw-p 00000000 00:00 0
7fa68c021000-7fa690000000 ---p 00000000 00:00 0
7fa690000000-7fa690021000 rw-p 00000000 00:00 0
7fa690021000-7fa694000000 ---p 00000000 00:00 0
7fa694000000-7fa694021000 rw-p 00000000 00:00 0
7fa694021000-7fa698000000 ---p 00000000 00:00 0
7fa698000000-7fa698021000 rw-p 00000000 00:00 0
7fa698021000-7fa69c000000 ---p 00000000 00:00 0
7fa69c000000-7fa69c021000 rw-p 00000000 00:00 0
7fa69c021000-7fa6a0000000 ---p 00000000 00:00 0
7fa6a0000000-7fa6a0021000 rw-p 00000000 00:00 0
7fa6a0021000-7fa6a4000000 ---p 00000000 00:00 0
7fa6a4000000-7fa6a4021000 rw-p 00000000 00:00 0
7fa6a4021000-7fa6a8000000 ---p 00000000 00:00 0
7fa6a8000000-7fa6a8021000 rw-p 00000000 00:00 0
7fa6a8021000-7fa6ac000000 ---p 00000000 00:00 0
7fa6ac000000-7fa6ac021000 rw-p 00000000 00:00 0
7fa6ac021000-7fa6b0000000 ---p 00000000 00:00 0
7fa6b0000000-7fa6b0021000 rw-p 00000000 00:00 0
7fa6b0021000-7fa6b4000000 ---p 00000000 00:00 0
7fa6b4000000-7fa6b4021000 rw-p 00000000 00:00 0
7fa6b4021000-7fa6b8000000 ---p 00000000 00:00 0
7fa6b8000000-7fa6b8021000 rw-p 00000000 00:00 0
7fa6b8021000-7fa6bc000000 ---p 00000000 00:00 0
7fa6bc000000-7fa6bc021000 rw-p 00000000 00:00 0
7fa6bc021000-7fa6c0000000 ---p 00000000 00:00 0
7fa6c0000000-7fa6c0021000 rw-p 00000000 00:00 0
7fa6c0021000-7fa6c4000000 ---p 00000000 00:00 0
7fa6c4000000-7fa6c4021000 rw-p 00000000 00:00 0
7fa6c4021000-7fa6c8000000 ---p 00000000 00:00 0
7fa6c8000000-7fa6c8021000 rw-p 00000000 00:00 0
7fa6c8021000-7fa6cc000000 ---p 00000000 00:00 0
7fa6cc000000-7fa6cc021000 rw-p 00000000 00:00 0
7fa6cc021000-7fa6d0000000 ---p 00000000 00:00 0
7fa6d0000000-7fa6d0021000 rw-p 00000000 00:00 0
7fa6d0021000-7fa6d4000000 ---p 00000000 00:00 0
7fa6d4000000-7fa6d4021000 rw-p 00000000 00:00 0
7fa6d4021000-7fa6d8000000 ---p 00000000 00:00 0
7fa6d8000000-7fa6d8021000 rw-p 00000000 00:00 0
7fa6d8021000-7fa6dc000000 ---p 00000000 00:00 0
7fa6dc000000-7fa6dc021000 rw-p 00000000 00:00 0
7fa6dc021000-7fa6e0000000 ---p 00000000 00:00 0
7fa6e0000000-7fa6e0021000 rw-p 00000000 00:00 0
7fa6e0021000-7fa6e4000000 ---p 00000000 00:00 0
7fa6e4000000-7fa6e4021000 rw-p 00000000 00:00 0
7fa6e4021000-7fa6e8000000 ---p 00000000 00:00 0
7fa6e8000000-7fa6e8021000 rw-p 00000000 00:00 0
7fa6e8021000-7fa6ec000000 ---p 00000000 00:00 0
7fa6ec000000-7fa6ec021000 rw-p 00000000 00:00 0
7fa6ec021000-7fa6f0000000 ---p 00000000 00:00 0
7fa6f0000000-7fa6f0021000 rw-p 00000000 00:00 0
7fa6f0021000-7fa6f4000000 ---p 00000000 00:00 0
7fa6f4000000-7fa6f4021000 rw-p 00000000 00:00 0
7fa6f4021000-7fa6f8000000 ---p 00000000 00:00 0
7fa6f8000000-7fa6f8021000 rw-p 00000000 00:00 0
7fa6f8021000-7fa6fc000000 ---p 00000000 00:00 0
7fa6fc000000-7fa6fc021000 rw-p 00000000 00:00 0
7fa6fc021000-7fa700000000 ---p 00000000 00:00 0
7fa700000000-7fa700021000 rw-p 00000000 00:00 0
7fa700021000-7fa704000000 ---p 00000000 00:00 0
7fa704000000-7fa704021000 rw-p 00000000 00:00 0
7fa704021000-7fa708000000 ---p 00000000 00:00 0
7fa708000000-7fa708021000 rw-p 00000000 00:00 0
7fa708021000-7fa70c000000 ---p 00000000 00:00 0
7fa70c000000-7fa70c021000 rw-p 00000000 00:00 0
7fa70c021000-7fa710000000 ---p 00000000 00:00 0
7fa710000000-7fa710021000 rw-p 00000000 00:00 0
7fa710021000-7fa714000000 ---p 00000000 00:00 0
7fa714000000-7fa714021000 rw-p 00000000 00:00 0
7fa714021000-7fa718000000 ---p 00000000 00:00 0
7fa718000000-7fa718021000 rw-p 00000000 00:00 0
7fa718021000-7fa71c000000 ---p 00000000 00:00 0
7fa71c000000-7fa71c021000 rw-p 00000000 00:00 0
7fa71c021000-7fa720000000 ---p 00000000 00:00 0
7fa720000000-7fa720021000 rw-p 00000000 00:00 0
7fa720021000-7fa724000000 ---p 00000000 00:00 0
7fa724000000-7fa724021000 rw-p 00000000 00:00 0
7fa724021000-7fa728000000 ---p 00000000 00:00 0
7fa728000000-7fa728021000 rw-p 00000000 00:00 0
7fa728021000-7fa72c000000 ---p 00000000 00:00 0
7fa72c000000-7fa72c021000 rw-p 00000000 00:00 0
7fa72c021000-7fa730000000 ---p 00000000 00:00 0
7fa730000000-7fa730021000 rw-p 00000000 00:00 0
7fa730021000-7fa734000000 ---p 00000000 00:00 0
7fa734000000-7fa734021000 rw-p 00000000 00:00 0
7fa734021000-7fa738000000 ---p 00000000 00:00 0
7fa738000000-7fa738021000 rw-p 00000000 00:00 0
7fa738021000-7fa73c000000 ---p 00000000 00:00 0
7fa73c000000-7fa73c021000 rw-p 00000000 00:00 0
7fa73c021000-7fa740000000 ---p 00000000 00:00 0
7fa740000000-7fa740021000 rw-p 00000000 00:00 0
7fa740021000-7fa744000000 ---p 00000000 00:00 0
7fa744000000-7fa744021000 rw-p 00000000 00:00 0
7fa744021000-7fa748000000 ---p 00000000 00:00 0
7fa748000000-7fa748021000 rw-p 00000000 00:00 0
7fa748021000-7fa74c000000 ---p 00000000 00:00 0
7fa74c000000-7fa74c021000 rw-p 00000000 00:00 0
7fa74c021000-7fa750000000 ---p 00000000 00:00 0
7fa750000000-7fa750021000 rw-p 00000000 00:00 0
7fa750021000-7fa754000000 ---p 00000000 00:00 0
7fa754000000-7fa754021000 rw-p 00000000 00:00 0
7fa754021000-7fa758000000 ---p 00000000 00:00 0
7fa758000000-7fa758021000 rw-p 00000000 00:00 0
7fa758021000-7fa75c000000 ---p 00000000 00:00 0
7fa75c000000-7fa75c021000 rw-p 00000000 00:00 0
7fa75c021000-7fa760000000 ---p 00000000 00:00 0
7fa760000000-7fa760021000 rw-p 00000000 00:00 0
7fa760021000-7fa764000000 ---p 00000000 00:00 0
7fa764000000-7fa764021000 rw-p 00000000 00:00 0
7fa764021000-7fa768000000 ---p 00000000 00:00 0
7fa768000000-7fa768021000 rw-p 00000000 00:00 0
7fa768021000-7fa76c000000 ---p 00000000 00:00 0
7fa76f36f000-7fa76f384000 r-xp 00000000 fd:00 33753820 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
7fa76f384000-7fa76f583000 ---p 00015000 fd:00 33753820 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
7fa76f583000-7fa76f584000 r--p 00014000 fd:00 33753820 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
7fa76f584000-7fa76f585000 rw-p 00015000 fd:00 33753820 /usr/lib64/libgcc_s-4.8.5-20150702.so.1
7fa76f59c000-7fa76f59d000 ---p 00000000 00:00 0
7fa76f59d000-7fa76f5ad000 rw-p 00000000 00:00 0
7fa76f5ad000-7fa76f5ae000 ---p 00000000 00:00 0
7fa76f5ae000-7fa76f5be000 rw-p 00000000 00:00 0
7fa76f5be000-7fa76f5bf000 ---p 00000000 00:00 0
7fa76f5bf000-7fa76f5cf000 rw-p 00000000 00:00 0
7fa76f5cf000-7fa76f5d0000 ---p 00000000 00:00 0
7fa76f5d0000-7fa76f5e0000 rw-p 00000000 00:00 0
7fa76f5e0000-7fa76f5e1000 ---p 00000000 00:00 0
7fa76f5e1000-7fa76f5f1000 rw-p 00000000 00:00 0
7fa76f5f1000-7fa76f5f2000 ---p 00000000 00:00 0
7fa76f5f2000-7fa76f602000 rw-p 00000000 00:00 0
7fa76f602000-7fa76f603000 ---p 00000000 00:00 0
7fa76f603000-7fa76f613000 rw-p 00000000 00:00 0
7fa76f613000-7fa76f614000 ---p 00000000 00:00 0
7fa76f614000-7fa76f624000 rw-p 00000000 00:00 0
7fa76f624000-7fa76f625000 ---p 00000000 00:00 0
7fa76f625000-7fa76f635000 rw-p 00000000 00:00 0
7fa76f635000-7fa76f636000 ---p 00000000 00:00 0
7fa76f636000-7fa76f646000 rw-p 00000000 00:00 0
7fa76f646000-7fa76f647000 ---p 00000000 00:00 0
7fa76f647000-7fa76f657000 rw-p 00000000 00:00 0
7fa76f657000-7fa76f658000 ---p 00000000 00:00 0
7fa76f658000-7fa76f668000 rw-p 00000000 00:00 0
7fa76f668000-7fa76f669000 ---p 00000000 00:00 0
7fa76f669000-7fa76f679000 rw-p 00000000 00:00 0
7fa76f679000-7fa76f67a000 ---p 00000000 00:00 0
7fa76f67a000-7fa76f68a000 rw-p 00000000 00:00 0
7fa76f68a000-7fa76f68b000 ---p 00000000 00:00 0
7fa76f68b000-7fa76f69b000 rw-p 00000000 00:00 0
7fa76f69b000-7fa76f69c000 ---p 00000000 00:00 0
7fa76f69c000-7fa76f6ac000 rw-p 00000000 00:00 0
7fa76f6ac000-7fa76f6ad000 ---p 00000000 00:00 0
7fa76f6ad000-7fa76f6bd000 rw-p 00000000 00:00 0
7fa76f6bd000-7fa76f6be000 ---p 00000000 00:00 0
7fa76f6be000-7fa76f6ce000 rw-p 00000000 00:00 0
7fa76f6ce000-7fa76f6cf000 ---p 00000000 00:00 0
7fa76f6cf000-7fa76f6df000 rw-p 00000000 00:00 0
7fa76f6df000-7fa76f6e0000 ---p 00000000 00:00 0
7fa76f6e0000-7fa76f6f0000 rw-p 00000000 00:00 0
7fa76f6f0000-7fa76f6f1000 ---p 00000000 00:00 0
7fa76f6f1000-7fa76f701000 rw-p 00000000 00:00 0
7fa76f701000-7fa76f702000 ---p 00000000 00:00 0
7fa76f702000-7fa76f712000 rw-p 00000000 00:00 0
7fa76f712000-7fa76f713000 ---p 00000000 00:00 0
7fa76f713000-7fa76f723000 rw-p 00000000 00:00 0
7fa76f723000-7fa76f724000 ---p 00000000 00:00 0
7fa76f724000-7fa76f734000 rw-p 00000000 00:00 0
7fa76f734000-7fa76f735000 ---p 00000000 00:00 0
7fa76f735000-7fa76f745000 rw-p 00000000 00:00 0
7fa76f745000-7fa76f746000 ---p 00000000 00:00 0
7fa76f746000-7fa76f756000 rw-p 00000000 00:00 0
7fa76f756000-7fa76f757000 ---p 00000000 00:00 0
7fa76f757000-7fa76f767000 rw-p 00000000 00:00 0
7fa76f767000-7fa76f768000 ---p 00000000 00:00 0
7fa76f768000-7fa76f778000 rw-p 00000000 00:00 0
7fa76f778000-7fa76f779000 ---p 00000000 00:00 0
7fa76f779000-7fa76f789000 rw-p 00000000 00:00 0
7fa76f789000-7fa76f78a000 ---p 00000000 00:00 0
7fa76f78a000-7fa76f79a000 rw-p 00000000 00:00 0
7fa76f79a000-7fa76f79b000 ---p 00000000 00:00 0
7fa76f79b000-7fa76f7ab000 rw-p 00000000 00:00 0
7fa76f7ab000-7fa76f7ac000 ---p 00000000 00:00 0
7fa76f7ac000-7fa76f7bc000 rw-p 00000000 00:00 0
7fa76f7bc000-7fa76f7bd000 ---p 00000000 00:00 0
7fa76f7bd000-7fa76f7cd000 rw-p 00000000 00:00 0
7fa76f7cd000-7fa76f7ce000 ---p 00000000 00:00 0
7fa76f7ce000-7fa76f7de000 rw-p 00000000 00:00 0
7fa76f7de000-7fa76f7df000 ---p 00000000 00:00 0
7fa76f7df000-7fa76f7ef000 rw-p 00000000 00:00 0
7fa76f7ef000-7fa76f9b3000 r-xp 00000000 fd:00 33796604 /usr/lib64/libc-2.17.so
7fa76f9b3000-7fa76fbb2000 ---p 001c4000 fd:00 33796604 /usr/lib64/libc-2.17.so
7fa76fbb2000-7fa76fbb6000 r--p 001c3000 fd:00 33796604 /usr/lib64/libc-2.17.so
7fa76fbb6000-7fa76fbb8000 rw-p 001c7000 fd:00 33796604 /usr/lib64/libc-2.17.so
7fa76fbb8000-7fa76fbbd000 rw-p 00000000 00:00 0
7fa76fbbd000-7fa76fbd4000 r-xp 00000000 fd:00 33838607 /usr/lib64/libpthread-2.17.so
7fa76fbd4000-7fa76fdd3000 ---p 00017000 fd:00 33838607 /usr/lib64/libpthread-2.17.so
7fa76fdd3000-7fa76fdd4000 r--p 00016000 fd:00 33838607 /usr/lib64/libpthread-2.17.so
7fa76fdd4000-7fa76fdd5000 rw-p 00017000 fd:00 33838607 /usr/lib64/libpthread-2.17.so
7fa76fdd5000-7fa76fdd9000 rw-p 00000000 00:00 0
7fa76fdd9000-7fa76fdfb000 r-xp 00000000 fd:00 33796588 /usr/lib64/ld-2.17.so
7fa76fe03000-7fa76fe04000 ---p 00000000 00:00 0
7fa76fe04000-7fa76fe14000 rw-p 00000000 00:00 0
7fa76fe14000-7fa76fe15000 ---p 00000000 00:00 0
7fa76fe15000-7fa76fe25000 rw-p 00000000 00:00 0
7fa76fe25000-7fa76fe26000 ---p 00000000 00:00 0
7fa76fe26000-7fa76fe36000 rw-p 00000000 00:00 0
7fa76fe36000-7fa76fe37000 ---p 00000000 00:00 0
7fa76fe37000-7fa76fe47000 rw-p 00000000 00:00 0
7fa76fe47000-7fa76fe48000 ---p 00000000 00:00 0
7fa76fe48000-7fa76fe58000 rw-p 00000000 00:00 0
7fa76fe58000-7fa76fe59000 ---p 00000000 00:00 0
7fa76fe59000-7fa76fe69000 rw-p 00000000 00:00 0
7fa76fe69000-7fa76fe6a000 ---p 00000000 00:00 0
7fa76fe6a000-7fa76fe7a000 rw-p 00000000 00:00 0
7fa76fe7a000-7fa76fe7b000 ---p 00000000 00:00 0
7fa76fe7b000-7fa76fe8b000 rw-p 00000000 00:00 0
7fa76fe8b000-7fa76fe8c000 ---p 00000000 00:00 0
7fa76fe8c000-7fa76fe9c000 rw-p 00000000 00:00 0
7fa76fe9c000-7fa76fe9d000 ---p 00000000 00:00 0
7fa76fe9d000-7fa76fead000 rw-p 00000000 00:00 0
7fa76fead000-7fa76feae000 ---p 00000000 00:00 0
7fa76feae000-7fa76febe000 rw-p 00000000 00:00 0
7fa76febe000-7fa76febf000 ---p 00000000 00:00 0
7fa76febf000-7fa76fecf000 rw-p 00000000 00:00 0
7fa76fecf000-7fa76fed0000 ---p 00000000 00:00 0
7fa76fed0000-7fa76fee0000 rw-p 00000000 00:00 0
7fa76fee0000-7fa76fee1000 ---p 00000000 00:00 0
7fa76fee1000-7fa76fef1000 rw-p 00000000 00:00 0
7fa76fef1000-7fa76fef2000 ---p 00000000 00:00 0
7fa76fef2000-7fa76ff02000 rw-p 00000000 00:00 0
7fa76ff02000-7fa76ff03000 ---p 00000000 00:00 0
7fa76ff03000-7fa76ff13000 rw-p 00000000 00:00 0
7fa76ff13000-7fa76ff14000 ---p 00000000 00---------------------------------------------------------
:00 0
7fa76ff14000-7fa76ff24000 rw-p 00000000 00:00 0
7fa76ff24000-7fa76ff25000 ---p 00000000 00:00 0
7fa76ff25000-7fa76ff35000 rw-p 00000000 00:00 0
7fa76ff35000-7fa76ff36000 ---p 00000000 00:00 0
7fa76ff36000-7fa76ff46000 rw-p 00000000 00:00 0
7fa76ff46000-7fa76ff47000 ---p 00000000 00:00 0
7fa76ff47000-7fa76ff57000 rw-p 00000000 00:00 0
7fa76ff57000-7fa76ff58000 ---p 00000000 00:00 0
7fa76ff58000-7fa76ff68000 rw-p 00000000 00:00 0
7fa76ff68000-7fa76ff69000 ---p 00000000 00:00 0
7fa76ff69000-7fa76ff79000 rw-p 00000000 00:00 0
7fa76ff79000-7fa76ff7a000 ---p 00000000 00:00 0
7fa76ff7a000-7fa76ff8a000 rw-p 00000000 00:00 0
7fa76ff8a000-7fa76ff8b000 ---p 00000000 00:00 0
7fa76ff8b000-7fa76ff9b000 rw-p 00000000 00:00 0
7fa76ff9b000-7fa76ff9c000 ---p 00000000 00:00 0
7fa76ff9c000-7fa76ffac000 rw-p 00000000 00:00 0
7fa76ffac000-7fa76ffad000 ---p 00000000 00:00 0
7fa76ffad000-7fa76ffbd000 rw-p 00000000 00:00 0
7fa76ffbd000-7fa76ffbe000 ---p 00000000 00:00 0
7fa76ffbe000-7fa76ffce000 rw-p 00000000 00:00 0
7fa76ffce000-7fa76ffcf000 ---p 00000000 00:00 0
7fa76ffcf000-7fa76ffe2000 rw-p 00000000 00:00 0
7fa76ffe4000-7fa76ffe7000 rw-p 00000000 00:00 0
7fa76ffe7000-7fa76ffe8000 ---p 00000000 00:00 0
7fa76ffe8000-7fa76fffa000 rw-p 00000000 00:00 0
7fa76fffa000-7fa76fffb000 r--p 00021000 fd:00 33796588 /usr/lib64/ld-2.17.so
7fa76fffb000-7fa76fffc000 rw-p 00022000 fd:00 33796588 /usr/lib64/ld-2.17.so
7fa76fffc000-7fa76fffd000 rw-p 00000000 00:00 0
7fff70000000-7fff706d8000 rw-p 00000000 00:00 0 [stack]
7fff707d4000-7fff707d6000 r-xp 00000000 00:00 0 [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0 [vsyscall]

The local system is:

lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6146 CPU @ 3.20GHz
Stepping: 4
CPU MHz: 3781.835
CPU max MHz: 4200.0000
CPU min MHz: 1200.0000
BogoMIPS: 6400.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 25344K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_ppin intel_pt ssbd mba ibrs ibpb stibp tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts pku ospke md_clear spec_ctrl intel_stibp flush_l1d

The remote system is:

lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 80
On-line CPU(s) list: 0-79
Thread(s) per core: 2
Core(s) per socket: 20
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 85
Model name: Intel(R) Xeon(R) Gold 6248 CPU @ 2.50GHz
Stepping: 7
CPU MHz: 2501.000
CPU max MHz: 2501.0000
CPU min MHz: 1000.0000
BogoMIPS: 5000.00
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 1024K
L3 cache: 28160K
NUMA node0 CPU(s): 0-19,40-59
NUMA node1 CPU(s): 20-39,60-79
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb cat_l3 cdp_l3 invpcid_single intel_pt ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm mpx rdt_a avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts hwp hwp_act_window hwp_epp hwp_pkg_req pku ospke avx512_vnni md_clear spec_ctrl intel_stibp flush_l1d arch_capabilities

Support io_uring

io_uring first appeared in Linux 5.1 and is under heavy development. It is a new async I/O primitive that addresses many of the issues in the traditional readiness-based I/O model that Unix uses, significantly reducing latency and increasing throughput. This article is a good introduction.

With the advent of fat pipes, saturating a link with the traditional API is becoming increasingly difficult. Switching to something like io_uring should help achieve that goal; a sketch follows.
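A minimal sketch, assuming liburing is installed, of what an io_uring-based receive loop could look like (illustrative only; ntttcp-for-linux does not currently use io_uring):

/* Illustrative io_uring receive loop using liburing.
 * Assumes 'sockfd' is an already-connected TCP socket. */
#include <liburing.h>
#include <stdint.h>

static uint64_t recv_all(int sockfd)
{
	struct io_uring ring;
	struct io_uring_cqe *cqe;
	char buf[65536];
	uint64_t total = 0;

	if (io_uring_queue_init(64, &ring, 0) < 0)
		return 0;

	for (;;) {
		/* Queue one recv() on the socket and submit it to the kernel. */
		struct io_uring_sqe *sqe = io_uring_get_sqe(&ring);
		if (!sqe)
			break;
		io_uring_prep_recv(sqe, sockfd, buf, sizeof(buf), 0);
		io_uring_submit(&ring);

		/* Block until the completion arrives; cqe->res is recv()'s result. */
		if (io_uring_wait_cqe(&ring, &cqe) < 0)
			break;
		int n = cqe->res;
		io_uring_cqe_seen(&ring, cqe);
		if (n <= 0)
			break;  /* peer closed the connection, or an error occurred */
		total += n;
	}

	io_uring_queue_exit(&ring);
	return total;
}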

Send throughput not actually testable

The Windows version of ntttcp has the ability for a single sender to send to multiple receivers, which does not work with this version of the utility. This served to essentially eliminate the receiver side as a bottleneck, allowing the sender's throughput to be fully measured. The Windows syntax looked something like this: sender "ntttcp -s -t 60 -m 8,*,192.168.0.2 8,*,192.168.0.3 -l 256k -a 2 -p 5000"; receiver 1 "ntttcp -r -t 60 -m 8,*,192.168.0.2 -a 16 -p 5000"; receiver 2 "ntttcp -r -t 60 -m 8,*,192.168.0.3 -a 16 -p 5008".
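Until one-sender-to-multiple-receivers is supported natively, one rough workaround (a sketch only, not equivalent to the Windows behavior, since the instances do not coordinate) is to run one sender instance per receiver host in parallel, assuming each receiver host is already running 'ntttcp -r':

./ntttcp -s 192.168.0.2 -t 60 &
./ntttcp -s 192.168.0.3 -t 60 &
wait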

set_socket_non_blocking() is commented out in udpstream.c

In udpstream.c, the set_socket_non_blocking() call is commented out. Any reason why?

I thought non-blocking I/O was the only default I/O model implemented in ntttcp-for-linux, and it is better for throughput performance than blocking I/O. The TCP mode still sets the socket to non-blocking mode.

Can we uncomment it?
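For reference, such a helper is typically the standard fcntl() idiom shown below (a generic sketch, not necessarily ntttcp's exact implementation):

/* Generic non-blocking-socket helper (sketch only). */
#include <fcntl.h>

int set_socket_non_blocking(int fd)
{
	int flags = fcntl(fd, F_GETFL, 0);  /* read the current file status flags */
	if (flags == -1)
		return -1;
	return fcntl(fd, F_SETFL, flags | O_NONBLOCK);  /* add O_NONBLOCK */
}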

JSON log output

Currently it supports the '-x' option to print results into an XML log file. We might need to provide another option (like '-j') to support a JSON file too.

A new name to this tool?

I know it is not good practice to rename a tool, especially when lots of users are using it every day.

Many people have asked me: ntttcp-for-linux is actually quite different from the Windows version of ntttcp, so why do we still follow that name? One example of the difference is the tools' options/usage. Keeping this name (ntttcp-for-linux) can be confusing to some of us.

To be frank, those are good questions. I am seriously considering giving this tool a new name. For example, nspeed, or netspeed?

UDP throughput is not credible: it shows 224 Gbps on a host constrained by PCIe Gen3 x8 (~56 Gbps). TCP throughput on the same link is always spot on.

ntttcp reports TCP throughput just fine. UDP sometimes reports >= 4x the maximum possible throughput.

nload shows

Curr: 49.43 GBit/s
Max: 50.36 GBit/s

while ntttcp reports 224.59Gbps simultaneously.

ssh @ ntttcp -s 192.168.100.1 -t 60 -W 2 -C 2 -u --show-nic-packets enp24s0f0 --show-dev-interrupts mlx5_0 -O ntttcp.enp24s0f0.2021-01-22T17:06:10-0600

  • ntttcp -r -t 60 -W 2 -C 2 -u --show-nic-packets enp24s0f0 --show-dev-interrupts mlx5_0 -O ntttcp.enp24s0f0.2021-01-22T17:06:10-0600
    NTTTCP for Linux 1.4.0

17:06:13 INFO: the UDP send size is too big. use the max value
17:06:13 INFO: 17 threads created
NTTTCP for Linux 1.4.0

17:09:18 INFO: the UDP send size is too big. use the max value
17:06:13 INFO: Test cycle time negotiated is: 64 seconds
17:09:18 INFO: Test cycle time negotiated is: 64 seconds
17:09:18 INFO: 64 threads created
17:06:13 INFO: Network activity progressing...
17:09:18 INFO: 64 connections created in 11400 microseconds
17:09:18 INFO: Network activity progressing...
Real-time throughput: 224.59Gbps
Real-time throughput: 3.88Gbps
Real-time throughput: 220.25Gbps
Real-time throughput: 4.79Gbps
Real-time throughput: 212.56Gbps
Real-time throughput: 5.78Gbps
Real-time throughput: 211.80Gbps
17:09:20 INFO: Test warmup completed.
Real-time throughput: 14.03Gbps
17:06:15 INFO: Test warmup completed.
Real-time throughput: 14.66Gbps
Real-time throughput: 211.70Gbps
Real-time throughput: 13.00Gbps
Real-time throughput: 211.11Gbps
Real-time throughput: 13.66Gbps
Real-time throughput: 212.03Gbps
Real-time throughput: 13.76Gbps
Real-time throughput: 211.90Gbps
Real-time throughput: 12.55Gbps
Real-time throughput: 211.42Gbps
Real-time throughput: 13.34Gbps
Real-time throughput: 212.46Gbps
Real-time throughput: 14.92Gbps
Real-time throughput: 212.47Gbps
Real-time throughput: 10.97Gbps
Real-time throughput: 211.27Gbps
Real-time throughput: 13.45Gbps
Real-time throughput: 212.22Gbps
Real-time throughput: 13.25Gbps
Real-time throughput: 211.98Gbps
Real-time throughput: 14.44Gbps
Real-time throughput: 211.84Gbps
Real-time throughput: 12.64Gbps
Real-time throughput: 211.77Gbps
Real-time throughput: 12.38Gbps
Real-time throughput: 211.97Gbps
Real-time throughput: 12.02Gbps
Real-time throughput: 211.69Gbps
Real-time throughput: 10.27Gbps
Real-time throughput: 211.42Gbps
Real-time throughput: 10.74Gbps
Real-time throughput: 210.76Gbps
Real-time throughput: 13.98Gbps
Real-time throughput: 212.06Gbps
Real-time throughput: 15.31Gbps
Real-time throughput: 212.02Gbps
Real-time throughput: 14.34Gbps
Real-time throughput: 212.02Gbps
Real-time throughput: 15.50Gbps
Real-time throughput: 210.64Gbps
Real-time throughput: 10.82Gbps
Real-time throughput: 211.07Gbps
Real-time throughput: 9.45Gbps
Real-time throughput: 212.18Gbps
Real-time throughput: 12.33Gbps
Real-time throughput: 212.24Gbps
Real-time throughput: 15.33Gbps
Real-time throughput: 209.95Gbps
Real-time throughput: 11.81Gbps
Real-time throughput: 211.70Gbps
Real-time throughput: 14.93Gbps
Real-time throughput: 212.42Gbps
Real-time throughput: 12.86Gbps
Real-time throughput: 211.48Gbps
Real-time throughput: 12.88Gbps
Real-time throughput: 209.77Gbps
Real-time throughput: 12.73Gbps
Real-time throughput: 210.95Gbps
Real-time throughput: 11.34Gbps
Real-time throughput: 211.52Gbps
Real-time throughput: 9.82Gbps
Real-time throughput: 212.21Gbps
Real-time throughput: 13.47Gbps
Real-time throughput: 210.27Gbps
Real-time throughput: 12.01Gbps
Real-time throughput: 211.72Gbps
Real-time throughput: 13.58Gbps
Real-time throughput: 212.00Gbps
Real-time throughput: 16.84Gbps
Real-time throughput: 210.88Gbps
Real-time throughput: 14.13Gbps
Real-time throughput: 210.31Gbps
Real-time throughput: 10.85Gbps
Real-time throughput: 211.57Gbps
Real-time throughput: 13.40Gbps
Real-time throughput: 212.02Gbps
Real-time throughput: 10.63Gbps
Real-time throughput: 211.19Gbps
Real-time throughput: 15.06Gbps
Real-time throughput: 211.32Gbps
Real-time throughput: 16.16Gbps
Real-time throughput: 211.58Gbps
Real-time throughput: 12.64Gbps
Real-time throughput: 212.02Gbps
Real-time throughput: 13.56Gbps
Real-time throughput: 210.69Gbps
Real-time throughput: 10.34Gbps
Real-time throughput: 211.79Gbps
Real-time throughput: 11.16Gbps
Real-time throughput: 211.23Gbps
Real-time throughput: 14.99Gbps
Real-time throughput: 212.00Gbps
Real-time throughput: 13.03Gbps
Real-time throughput: 209.73Gbps
Real-time throughput: 12.87Gbps
Real-time throughput: 212.41Gbps
Real-time throughput: 14.36Gbps
Real-time throughput: 212.22Gbps
Real-time throughput: 14.59Gbps
Real-time throughput: 211.84Gbps
Real-time throughput: 13.82Gbps
Real-time throughput: 209.68Gbps
Real-time throughput: 13.38Gbps
Real-time throughput: 210.55Gbps
Real-time throughput: 15.46Gbps
Real-time throughput: 211.65Gbps
Real-time throughput: 13.07Gbps
Real-time throughput: 211.94Gbps
Real-time throughput: 18.01Gbps
Real-time throughput: 210.38Gbps
Real-time throughput: 16.14Gbps
Real-time throughput: 210.69Gbps
Real-time throughput: 11.06Gbps
Real-time throughput: 211.91Gbps
Real-time throughput: 12.12Gbps
Real-time throughput: 210.89Gbps
Real-time throughput: 13.91Gbps
Real-time throughput: 209.28Gbps
Real-time throughput: 12.82Gbps
Real-time throughput: 210.86Gbps
Real-time throughput: 13.93Gbps
Real-time throughput: 210.79Gbps
Real-time throughput: 15.03Gbps
Real-time throughput: 210.89Gbps
Real-time throughput: 15.58Gbps
Real-time throughput: 210.79Gbps
Real-time throughput: 8.64Gbps
Real-time throughput: 211.63Gbps
Real-time throughput: 9.77Gbps
Real-time throughput: 210.34Gbps
Real-time throughput: 12.94Gbps
Real-time throughput: 210.76Gbps
Real-time throughput: 14.53Gbps
Real-time throughput: 211.00Gbps
Real-time throughput: 13.62Gbps
Real-time throughput: 211.46Gbps
Real-time throughput: 9.33Gbps
Real-time throughput: 210.83Gbps
Real-time throughput: 13.55Gbps
Real-time throughput: 210.12Gbps
Real-time throughput: 13.32Gbps
Real-time throughput: 210.33Gbps
Real-time throughput: 11.53Gbps
Real-time throughput: 210.46Gbps
Real-time throughput: 11.87Gbps
Real-time throughput: 211.90Gbps
Real-time throughput: 13.31Gbps
Real-time throughput: 209.85Gbps
Real-time throughput: 13.80Gbps
Real-time throughput: 211.07Gbps
Real-time throughput: 13.50Gbps
Real-time throughput: 211.90Gbps
Real-time throughput: 12.51Gbps
Real-time throughput: 211.89Gbps
Real-time throughput: 16.05Gbps
Real-time throughput: 210.71Gbps
Real-time throughput: 11.61Gbps
Real-time throughput: 212.42Gbps
Real-time throughput: 12.37Gbps
Real-time throughput: 210.53Gbps
Real-time throughput: 11.23Gbps
Real-time throughput: 211.46Gbps
Real-time throughput: 12.38Gbps
Real-time throughput: 209.61Gbps
Real-time throughput: 14.87Gbps
Real-time throughput: 210.24Gbps
Real-time throughput: 10.98Gbps
Real-time throughput: 210.25Gbps
Real-time throughput: 14.29Gbps
Real-time throughput: 209.46Gbps
Real-time throughput: 15.74Gbps
Real-time throughput: 209.34Gbps
Real-time throughput: 12.05Gbps
Real-time throughput: 210.47Gbps
Real-time throughput: 12.75Gbps
Real-time throughput: 210.93Gbps
Real-time throughput: 13.16Gbps
Real-time throughput: 210.22Gbps
Real-time throughput: 12.84Gbps
Real-time throughput: 209.33Gbps
Real-time throughput: 17.58Gbps
Real-time throughput: 210.90Gbps
Real-time throughput: 13.08Gbps
Real-time throughput: 209.22Gbps
Real-time throughput: 13.31Gbps
Real-time throughput: 208.82Gbps
Real-time throughput: 12.14Gbps
Real-time throughput: 208.85Gbps
Real-time throughput: 15.88Gbps
Real-time throughput: 209.82Gbps
Real-time throughput: 14.67Gbps
Real-time throughput: 210.51Gbps
Real-time throughput: 12.32Gbps
Real-time throughput: 208.36Gbps
Real-time throughput: 14.17Gbps
Real-time throughput: 209.58Gbps
Real-time throughput: 15.51Gbps
Real-time throughput: 209.66Gbps
Real-time throughput: 9.86Gbps
Real-time throughput: 209.85Gbps
Real-time throughput: 13.42Gbps
Real-time throughput: 207.84Gbps
Real-time throughput: 14.68Gbps
Real-time throughput: 210.13Gbps
Real-time throughput: 14.73Gbps
Real-time throughput: 209.80Gbps
Real-time throughput: 14.83Gbps
Real-time throughput: 210.57Gbps
Real-time throughput: 17.17Gbps
Real-time throughput: 208.10Gbps
Real-time throughput: 14.25Gbps
Real-time throughput: 209.61Gbps
Real-time throughput: 11.85Gbps
Real-time throughput: 208.79Gbps
Real-time throughput: 14.92Gbps
17:07:15 INFO: Test run completed.
17:07:15 INFO: Test cooldown is in progress...
Real-time throughput: 210.36Gbps
17:10:20 INFO: Test run completed.
17:10:20 INFO: Test cooldown is in progress...
17:07:17 INFO: Test cycle finished.
17:07:17 INFO: ##### Totals: #####
17:07:17 INFO: test duration :60.35 seconds
17:07:17 INFO: total bytes :100200489805
17:07:17 INFO: throughput :13.28Gbps
17:07:17 INFO: total packets:
17:07:17 INFO: tx_packets :9
17:07:17 INFO: rx_packets :45032701
17:07:17 INFO: interrupts:
17:07:17 INFO: total :0
17:07:17 INFO: pkts/interrupt :0.00
17:10:22 INFO: Test cycle finished.
17:07:17 INFO: cpu cores :48
17:07:17 INFO: cpu speed :3899.804MHz
17:07:17 INFO: user :0.11%
17:07:17 INFO: system :0.73%
17:07:17 INFO: idle :98.13%
17:07:17 INFO: iowait :0.00%
17:07:17 INFO: softirq :1.03%
17:07:17 INFO: cycles/byte :2.11
17:07:17 INFO: cpu busy (all) :40.20%

17:10:22 INFO: receiver exited from current test
17:10:22 INFO: 64 connections tested
17:10:22 INFO: ##### Totals: #####
17:10:22 INFO: test duration :60.59 seconds
17:10:22 INFO: total bytes :1597111689953
17:10:22 INFO: throughput :210.89Gbps
17:10:22 INFO: total packets:
17:10:22 INFO: tx_packets :45191247
17:10:22 INFO: rx_packets :9
17:10:22 INFO: interrupts:
17:10:22 INFO: total :0
17:10:22 INFO: pkts/interrupt :0.00
17:10:22 INFO: cpu cores :80
17:10:22 INFO: cpu speed :2501.000MHz
17:10:22 INFO: user :0.37%
17:10:22 INFO: system :71.94%
17:10:22 INFO: idle :18.23%
17:10:22 INFO: iowait :0.00%
17:10:22 INFO: softirq :9.46%
17:10:22 INFO: cycles/byte :6.21
17:10:22 INFO: cpu busy (all) :6373.12%

Socket read error 104 occurred on TCP receiver running in a VM (Sender from a remote VM)

Running ntttcp (TCP sender) on a Linux-based VM against another ntttcp (TCP receiver) on a Linux-based VM on a remote host machine. The command lines are the defaults.

After running for ~40 minutes, the TCP receiver started erroring out with "Socket read error 104" (104 is Connection reset by peer). Then, within 2 hours, it got 25 more socket read errors, and no traffic was flowing from the sender. The sender did not output any error, so I am not sure why it stopped transmitting, because as far as I can tell the 4 sockets (23, 25, 41, 49) created on port 6051 are not getting any error, so port 6051 should still be good at sending packets.

Both VMs continue to function, meaning I can stream web traffic from a browser app running on each VM.

This is just a basic default-setup TCP unidirectional scenario. I'm not sure if this is a VM kernel issue or a host kernel network issue. Is this something that has been seen before when running on an Azure VM?

---------------- Receiver ------------------
ntttcp -r -V --show-tcp-retrans --show-nic-packets wlan0 -p 6051 -t 86400 <
NTTTCP for Linux 1.4.0

*** receiver role
ports: 16
cpu affinity: *
server address: 0.0.0.0
domain: IPv4
protocol: TCP
server port starting at: 6051
receiver socket buffer (bytes): 65536
test warm-up (sec): no
test duration (sec): 86400
test cool-down (sec): no
show system tcp retransmit: yes
show packets for: wlan0
quiet mode: disabled
verbose mode: enabled

17:11:02 DBG : user limits for maximum number of open files: soft: 32768; hard: 32768
17:11:02 INFO: 17 threads created
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6051
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6050
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6066
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6065
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6064
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6063
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6062
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6061
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6060
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6059
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6058
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6057
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6056
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6055
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6054
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6053
17:11:02 DBG : ntttcp server is listening on 0.0.0.0:6052
17:11:05 DBG : Sync connection: 10.231.111.24:60992 --> local:6050 [socket 22]
17:11:05 DBG : New connection: 10.231.111.24:60946 --> local:6051 [socket 23]
17:11:05 DBG : New connection: 10.231.111.24:60908 --> local:6053 [socket 24]
17:11:05 DBG : New connection: 10.231.111.24:60950 --> local:6051 [socket 25]
17:11:05 DBG : New connection: 10.231.111.24:60886 --> local:6052 [socket 26]
17:11:05 DBG : New connection: 10.231.111.24:60888 --> local:6052 [socket 27]
17:11:05 DBG : New connection: 10.231.111.24:60914 --> local:6053 [socket 28]
17:11:05 DBG : New connection: 10.231.111.24:60794 --> local:6054 [socket 29]
17:11:05 DBG : New connection: 10.231.111.24:60798 --> local:6054 [socket 30]
17:11:05 DBG : New connection: 10.231.111.24:60992 --> local:6055 [socket 31]
17:11:05 DBG : New connection: 10.231.111.24:60988 --> local:6055 [socket 32]
17:11:05 DBG : New connection: 10.231.111.24:60858 --> local:6056 [socket 33]
17:11:05 DBG : New connection: 10.231.111.24:60756 --> local:6057 [socket 34]
17:11:05 DBG : New connection: 10.231.111.24:60762 --> local:6057 [socket 35]
17:11:05 DBG : New connection: 10.231.111.24:60862 --> local:6056 [socket 36]
17:11:05 DBG : New connection: 10.231.111.24:60916 --> local:6058 [socket 37]
17:11:05 DBG : New connection: 10.231.111.24:60912 --> local:6058 [socket 38]
17:11:05 DBG : New connection: 10.231.111.24:60938 --> local:6059 [socket 39]
17:11:05 DBG : New connection: 10.231.111.24:60890 --> local:6052 [socket 40]
17:11:05 DBG : New connection: 10.231.111.24:60948 --> local:6051 [socket 41]
17:11:05 DBG : New connection: 10.231.111.24:60710 --> local:6060 [socket 42]
17:11:05 DBG : New connection: 10.231.111.24:60758 --> local:6057 [socket 48]
17:11:05 DBG : New connection: 10.231.111.24:60864 --> local:6056 [socket 51]
17:11:05 DBG : New connection: 10.231.111.24:60944 --> local:6051 [socket 49]
17:11:05 DBG : New connection: 10.231.111.24:60910 --> local:6053 [socket 47]
17:11:05 DBG : New connection: 10.231.111.24:60716 --> local:6060 [socket 50]
17:11:05 DBG : New connection: 10.231.111.24:60994 --> local:6055 [socket 45]
17:11:05 DBG : New connection: 10.231.111.24:60942 --> local:6059 [socket 43]
17:11:05 DBG : New connection: 10.231.111.24:60800 --> local:6054 [socket 44]
17:11:05 DBG : New connection: 10.231.111.24:60910 --> local:6063 [socket 55]
17:11:05 DBG : New connection: 10.231.111.24:60884 --> local:6052 [socket 46]
17:11:05 DBG : New connection: 10.231.111.24:60912 --> local:6063 [socket 59]
17:11:05 DBG : New connection: 10.231.111.24:60918 --> local:6058 [socket 54]
17:11:05 DBG : New connection: 10.231.111.24:60914 --> local:6058 [socket 61]
17:11:05 DBG : New connection: 10.231.111.24:60860 --> local:6056 [socket 52]
17:11:05 DBG : New connection: 10.231.111.24:60728 --> local:6061 [socket 53]
17:11:05 DBG : New connection: 10.231.111.24:60700 --> local:6064 [socket 58]
17:11:05 DBG : New connection: 10.231.111.24:60712 --> local:6060 [socket 57]
17:11:05 DBG : New connection: 10.231.111.24:60908 --> local:6063 [socket 60]
17:11:05 DBG : New connection: 10.231.111.24:60826 --> local:6062 [socket 56]
17:11:05 DBG : New connection: 10.231.111.24:60796 --> local:6054 [socket 72]
17:11:05 DBG : New connection: 10.231.111.24:60990 --> local:6055 [socket 62]
17:11:05 DBG : New connection: 10.231.111.24:60732 --> local:6061 [socket 68]
17:11:05 DBG : New connection: 10.231.111.24:60702 --> local:6064 [socket 70]
17:11:05 DBG : New connection: 10.231.111.24:60730 --> local:6061 [socket 73]
17:11:05 DBG : New connection: 10.231.111.24:60824 --> local:6062 [socket 67]
17:11:05 DBG : New connection: 10.231.111.24:60714 --> local:6060 [socket 65]
17:11:05 DBG : New connection: 10.231.111.24:60822 --> local:6062 [socket 75]
17:11:05 DBG : New connection: 10.231.111.24:60768 --> local:6065 [socket 66]
17:11:05 DBG : New connection: 10.231.111.24:60916 --> local:6063 [socket 69]
17:11:05 DBG : New connection: 10.231.111.24:60944 --> local:6059 [socket 63]
17:11:05 DBG : New connection: 10.231.111.24:60912 --> local:6053 [socket 71]
17:11:05 DBG : New connection: 10.231.111.24:60998 --> local:6064 [socket 74]
17:11:05 DBG : New connection: 10.231.111.24:60760 --> local:6057 [socket 64]
17:11:05 DBG : New connection: 10.231.111.24:60704 --> local:6064 [socket 79]
17:11:05 DBG : New connection: 10.231.111.24:60834 --> local:6062 [socket 76]
17:11:05 DBG : New connection: 10.231.111.24:60734 --> local:6061 [socket 77]
17:11:05 DBG : New connection: 10.231.111.24:60940 --> local:6059 [socket 78]
17:11:05 DBG : New connection: 10.231.111.24:60770 --> local:6065 [socket 80]
17:11:05 DBG : New connection: 10.231.111.24:60766 --> local:6065 [socket 81]
17:11:05 DBG : New connection: 10.231.111.24:60846 --> local:6066 [socket 82]
17:11:05 DBG : New connection: 10.231.111.24:60840 --> local:6066 [socket 83]
17:11:05 DBG : New connection: 10.231.111.24:60792 --> local:6065 [socket 84]
17:11:05 DBG : New connection: 10.231.111.24:60844 --> local:6066 [socket 85]
17:11:05 DBG : New connection: 10.231.111.24:60842 --> local:6066 [socket 86]
17:11:05 INFO: Network activity progressing...
socket read error: 104466.65Kbps
17:53:07 DBG : socket closed: 36
socket read error: 104
17:53:07 DBG : socket closed: 29
Real-time throughput: 0.00bps
socket read error: 1040.00bps
socket read error: 104
socket read error: 104
18:59:39 DBG : socket closed: 86
18:59:39 DBG : socket closed: 70
18:59:39 DBG : socket closed: 59
socket read error: 104
18:59:39 DBG : socket closed: 83
socket read error: 104
socket read error: 104
18:59:39 DBG : socket closed: 58
18:59:39 DBG : socket closed: 62
socket read error: 1040.00bps
19:04:41 DBG : socket closed: 50
socket read error: 104
19:04:41 DBG : socket closed: 79
socket read error: 104
19:04:41 DBG : socket closed: 40
socket read error: 104
socket read error: 104
socket read error: 104
19:04:41 DBG : socket closed: 32
19:04:41 DBG : socket closed: 81
19:04:41 DBG : socket closed: 28
socket read error: 104
19:04:41 DBG : socket closed: 57
socket read error: 1040.00bps
19:04:43 DBG : socket closed: 38
socket read error: 104
19:04:43 DBG : socket closed: 72
socket read error: 104
19:04:43 DBG : socket closed: 61
socket read error: 104
socket read error: 104
19:04:43 DBG : socket closed: 80
19:04:43 DBG : socket closed: 37
socket read error: 104
19:04:43 DBG : socket closed: 77
socket read error: 104
19:04:43 DBG : socket closed: 53
socket read error: 1040.00bps
19:11:51 DBG : socket closed: 47
socket read error: 104
19:11:51 DBG : socket closed: 24
socket read error: 104
socket read error: 104
19:11:51 DBG : socket closed: 85
19:11:51 DBG : socket closed: 69
socket read error: 104
19:11:51 DBG : socket closed: 82
Real-time throughput: 0.00bps

-------------------Sender --------------------
ntttcp -s 10.231.111.166 -V --show-tcp-retrans --show-nic-packets wlan0 -p 6051 -t 86400 <
NTTTCP for Linux 1.4.0

*** sender role
connections: 16 X 4 X 1
cpu affinity: *
server address: 10.231.111.166
domain: IPv4
protocol: TCP
server port starting at: 6051
sender socket buffer (bytes): 131072
test warm-up (sec): no
test duration (sec): 86400
test cool-down (sec): no
show system tcp retransmit: yes
show packets for: wlan0
quiet mode: disabled
verbose mode: enabled

17:11:07 DBG : user limits for maximum number of open files: soft: 32768; hard: 32768
17:11:07 DBG : Sync connection: local:60992 [socket:3] --> 10.231.111.166:6050
17:11:07 DBG : New connection: local:60946 [socket:5] --> 10.231.111.166:6051
17:11:07 DBG : New connection: local:60950 [socket:7] --> 10.231.111.166:6051
17:11:07 DBG : New connection: local:60908 [socket:13] --> 10.231.111.166:6053
17:11:07 DBG : New connection: local:60886 [socket:9] --> 10.231.111.166:6052
17:11:07 DBG : New connection: local:60888 [socket:10] --> 10.231.111.166:6052
17:11:07 DBG : New connection: local:60914 [socket:16] --> 10.231.111.166:6053
17:11:07 DBG : New connection: local:60794 [socket:17] --> 10.231.111.166:6054
17:11:07 DBG : New connection: local:60798 [socket:19] --> 10.231.111.166:6054
17:11:07 DBG : New connection: local:60992 [socket:22] --> 10.231.111.166:6055
17:11:07 DBG : New connection: local:60988 [socket:21] --> 10.231.111.166:6055
17:11:07 DBG : New connection: local:60858 [socket:25] --> 10.231.111.166:6056
17:11:07 DBG : New connection: local:60756 [socket:29] --> 10.231.111.166:6057
17:11:07 DBG : New connection: local:60762 [socket:32] --> 10.231.111.166:6057
17:11:07 DBG : New connection: local:60862 [socket:27] --> 10.231.111.166:6056
17:11:07 DBG : New connection: local:60912 [socket:33] --> 10.231.111.166:6058
17:11:07 DBG : New connection: local:60916 [socket:35] --> 10.231.111.166:6058
17:11:07 DBG : New connection: local:60938 [socket:37] --> 10.231.111.166:6059
17:11:07 DBG : New connection: local:60942 [socket:39] --> 10.231.111.166:6059
17:11:07 DBG : New connection: local:60948 [socket:6] --> 10.231.111.166:6051
17:11:07 DBG : New connection: local:60944 [socket:4] --> 10.231.111.166:6051
17:11:07 DBG : New connection: local:60890 [socket:12] --> 10.231.111.166:6052
17:11:07 DBG : New connection: local:60910 [socket:14] --> 10.231.111.166:6053
17:11:07 DBG : New connection: local:60884 [socket:8] --> 10.231.111.166:6052
17:11:07 DBG : New connection: local:60944 [socket:40] --> 10.231.111.166:6059
17:11:07 DBG : New connection: local:60800 [socket:20] --> 10.231.111.166:6054
17:11:07 DBG : New connection: local:60710 [socket:41] --> 10.231.111.166:6060
17:11:07 DBG : New connection: local:60712 [socket:42] --> 10.231.111.166:6060
17:11:07 DBG : New connection: local:60994 [socket:24] --> 10.231.111.166:6055
17:11:07 DBG : New connection: local:60912 [socket:15] --> 10.231.111.166:6053
17:11:07 DBG : New connection: local:60990 [socket:23] --> 10.231.111.166:6055
17:11:07 DBG : New connection: local:60864 [socket:28] --> 10.231.111.166:6056
17:11:07 DBG : New connection: local:60716 [socket:44] --> 10.231.111.166:6060
17:11:07 DBG : New connection: local:60758 [socket:30] --> 10.231.111.166:6057
17:11:07 DBG : New connection: local:60796 [socket:18] --> 10.231.111.166:6054
17:11:07 DBG : New connection: local:60730 [socket:46] --> 10.231.111.166:6061
17:11:07 DBG : New connection: local:60860 [socket:26] --> 10.231.111.166:6056
17:11:07 DBG : New connection: local:60940 [socket:38] --> 10.231.111.166:6059
17:11:07 DBG : New connection: local:60826 [socket:52] --> 10.231.111.166:6062
17:11:07 DBG : New connection: local:60714 [socket:43] --> 10.231.111.166:6060
17:11:07 DBG : New connection: local:60728 [socket:45] --> 10.231.111.166:6061
17:11:07 INFO: 64 threads created
17:11:07 DBG : New connection: local:60732 [socket:47] --> 10.231.111.166:6061
17:11:07 DBG : New connection: local:60822 [socket:49] --> 10.231.111.166:6062
17:11:07 DBG : New connection: local:60910 [socket:53] --> 10.231.111.166:6063
17:11:07 DBG : New connection: local:60760 [socket:31] --> 10.231.111.166:6057
17:11:07 DBG : New connection: local:60734 [socket:48] --> 10.231.111.166:6061
17:11:07 DBG : New connection: local:60918 [socket:36] --> 10.231.111.166:6058
17:11:07 DBG : New connection: local:60844 [socket:66] --> 10.231.111.166:6066
17:11:07 DBG : New connection: local:60914 [socket:34] --> 10.231.111.166:6058
17:11:07 DBG : New connection: local:60824 [socket:50] --> 10.231.111.166:6062
17:11:07 DBG : New connection: local:60792 [socket:64] --> 10.231.111.166:6065
17:11:07 DBG : New connection: local:60834 [socket:54] --> 10.231.111.166:6062
17:11:07 DBG : New connection: local:60700 [socket:58] --> 10.231.111.166:6064
17:11:07 DBG : New connection: local:60912 [socket:55] --> 10.231.111.166:6063
17:11:07 DBG : New connection: local:60908 [socket:51] --> 10.231.111.166:6063
17:11:07 DBG : New connection: local:60916 [socket:56] --> 10.231.111.166:6063
17:11:07 DBG : New connection: local:60702 [socket:59] --> 10.231.111.166:6064
17:11:07 DBG : New connection: local:60998 [socket:57] --> 10.231.111.166:6064
17:11:07 DBG : New connection: local:60770 [socket:63] --> 10.231.111.166:6065
17:11:07 DBG : New connection: local:60704 [socket:60] --> 10.231.111.166:6064
17:11:07 DBG : New connection: local:60766 [socket:61] --> 10.231.111.166:6065
17:11:07 DBG : New connection: local:60846 [socket:67] --> 10.231.111.166:6066
17:11:07 DBG : New connection: local:60840 [socket:65] --> 10.231.111.166:6066
17:11:07 DBG : New connection: local:60768 [socket:62] --> 10.231.111.166:6065
17:11:07 DBG : New connection: local:60842 [socket:68] --> 10.231.111.166:6066
17:11:07 INFO: 64 connections created in 103018 microseconds
17:11:07 INFO: Network activity progressing...
Real-time throughput: 0.00bps

Remove structure slop to reduce cache pressure

Many shared structs are ordered in a way that, when built on an LP64 system, leaves unused padding bytes due to alignment. These structs could be shrunk by carefully reordering their members, and possibly by changing some member types to ones that pack better.

For instance, struct ntttcp_stream_server has a sizeof of 144 bytes, spanning 3 cache lines. According to pahole, it could be reduced to 129 bytes, only 5 bytes over what fits in 2 cache lines (128 bytes). By collapsing all the bool members into a single integer holding bit flags, it would fit in 2 cache lines, as sketched below.
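
To make the repacking concrete, here is a minimal sketch of the idea; the field names and flag macros are illustrative assumptions, not the actual ntttcp_stream_server layout:

#include <stdbool.h>

/* Before: on LP64, each bool is followed by 7 padding bytes so the
 * next pointer stays 8-byte aligned. */
struct stream_before {
	void *endpoint;    /* 8 bytes */
	bool  is_sync;     /* 1 byte + 7 bytes padding */
	void *buffer;      /* 8 bytes */
	bool  no_delay;    /* 1 byte + 7 bytes padding */
	void *stats;       /* 8 bytes */
};                         /* sizeof == 40 */

/* After: pointers first, bools collapsed into a single flags word. */
#define STREAM_IS_SYNC  (1u << 0)
#define STREAM_NO_DELAY (1u << 1)

struct stream_after {
	void         *endpoint;
	void         *buffer;
	void         *stats;
	unsigned int  flags;   /* STREAM_IS_SYNC | STREAM_NO_DELAY */
};                         /* sizeof == 32 */

Running pahole -C <struct name> on the built binary shows the padding holes before and after such a change.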

Testing UDP yields "the UDP send size is too big" error

Good evening,

I would like to test UDP throughput between two hosts.
I am using a simple command without any additional attributes (./ntttcp -r -u), but I get the following error.
Strangely enough, it seems that the sender achieves 10 Gbit/s throughput while the receiver struggles a bit.

NTTTCP for Linux 1.4.0
01:48:36 INFO: the UDP send size is too big. use the max value
01:48:36 INFO: 17 threads created
01:48:40 INFO: Network activity progressing...
01:48:43 INFO: error: cannot read data from socket: 8
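
For context on the first INFO line (an inference from UDP limits, not a quote from the ntttcp source): a single UDP datagram over IPv4 carries at most 65507 bytes of payload (the 65535-byte IP total-length limit minus the 20-byte IP header and the 8-byte UDP header). The default 64 KiB buffer (65536 bytes) is just over that limit, which would explain the message appearing even with no -b option. A hypothetical clamp looks like:

#include <stdio.h>

/* Max UDP payload over IPv4: 65535 - 20 (IP header) - 8 (UDP header). */
#define MAX_UDP_PAYLOAD 65507

/* Hypothetical helper; ntttcp's actual function and constant names
 * may differ. */
static long clamp_udp_send_size(long requested)
{
	if (requested > MAX_UDP_PAYLOAD) {
		/* mirrors the message in the log above */
		printf("the UDP send size is too big. use the max value\n");
		return MAX_UDP_PAYLOAD;
	}
	return requested;
}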

Ntttcp UDP receiver periodically reports an error (cannot read data from sender socket) and 0 Mbps real-time throughput while the sender keeps sending UDP traffic

I'm using ntttcp to send UDP traffic from one VM to another (remote host) VM.
I'm keeping the transfer size at 1024 B, which is below the MTU (1280 B), to avoid any fragmentation.
The command-line setups are below. However, after a while, the receiver periodically reports "error: cannot read data from socket: 3", and also reports 0 Mbps real-time throughput. But the sender still keeps sending UDP traffic (see attached screenshot).

There are no other applications running inside either VM except the ntttcp processes.
The UDP packets are transferred between port 8011 on both ends.

Questions:
What are port 60558 (socket 3) on the sender and port 8010 (socket 6) on the receiver used for, and what data do they communicate?
Is the "0 Mbps" real-time throughput reported on the receiver side bogus? It appears the receiver host Ethernet (Task Manager on the right side of the attached screenshot) is still continuously receiving UDP packets, which get steered into the VM.
How can I debug this to find out what's wrong with ntttcp?

Thanks.

Ntttcp Receiver

windows_x86_64:/data/local/tmp/test $ ./ntttcp -r -m 1,*,10.231.111.24 -V -p 8011 -t 86400 -u -b 1024
NTTTCP for Linux 1.4.0

*** receiver role
ports: 1
cpu affinity: *
server address: 10.231.111.24
domain: IPv4
protocol: UDP
server port starting at: 8011
receiver socket buffer (bytes): 1024
test warm-up (sec): no
test duration (sec): 86400
test cool-down (sec): no
show system tcp retransmit: no
quiet mode: disabled
verbose mode: enabled

23:49:48 DBG : user limits for maximum number of open files: soft: 32768; hard: 32768
23:49:48 DBG : Interface:[lo] Address: 127.0.0.1
23:49:48 DBG : Interface:[wlan0] Address: 10.231.111.24
23:49:48 INFO: 2 threads created
23:49:48 DBG : ntttcp server is listening on 10.231.111.24:8010
23:49:54 DBG : Sync connection: 10.231.111.166:60558 --> local:8010 [socket 6]
23:49:54 INFO: Network activity progressing...

Ntttcp Sender

1|windows_x86_64:/data/local/tmp/test $ ./ntttcp -s -m 1,*,10.231.111.24 -V -p 8011 -t 86400 -u -n 1 -b 1024
NTTTCP for Linux 1.4.0

*** sender role
connections: 1 X 1 X 1
cpu affinity: *
server address: 10.231.111.24
domain: IPv4
protocol: UDP
server port starting at: 8011
sender socket buffer (bytes): 1024
test warm-up (sec): no
test duration (sec): 86400
test cool-down (sec): no
show system tcp retransmit: no
quiet mode: disabled
verbose mode: enabled

23:49:52 DBG : user limits for maximum number of open files: soft: 32768; hard: 32768
23:49:52 DBG : Sync connection: local:60558 [socket:3] --> 10.231.111.24:8010
23:49:52 INFO: 1 threads created
23:49:52 DBG : Running UDP stream: local:0 [socket:4] --> 10.231.111.24:8011
23:49:52 INFO: 1 connections created in 2086 microseconds
23:49:52 INFO: Network activity progressing...

ntttcp_udp_uniDirectional_receiverShowsZeroBitsPerSecTPut_Screenshot 2022-05-12 171957

Question Related to the Code

I've been studying this code for a few days to learn more about network programming. I've learned some good things, thanks!

My question is: I've seen this use of memset in other code and samples, but I'm curious why the struct is cast to (char *) instead of just calling memset(&serv_addr, 0, sa_size):

memset((char*)&serv_addr, 0, sa_size);

Thanks in advance,

  • Roberto
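
For what it's worth: memset is declared as void *memset(void *s, int c, size_t n), and any object pointer converts to void * implicitly in both C and C++, so the two calls behave identically; the (char *) cast is a legacy habit from pre-ANSI code rather than a requirement. A minimal sketch:

#include <string.h>
#include <netinet/in.h>

static void zero_addr_demo(void)
{
	struct sockaddr_in serv_addr;

	/* Equivalent calls: &serv_addr converts to void * implicitly,
	 * so the (char *) cast changes nothing about what memset does. */
	memset((char *)&serv_addr, 0, sizeof(serv_addr));
	memset(&serv_addr, 0, sizeof(serv_addr));
}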

Receiver stuck forever from the beginning when running UDP protocol in no-sync mode (no issue in sync mode)

ntttcp for linux version: 1.4.0 (latest)

Command lines (the asterisks in the -m arguments were eaten by markdown rendering in the original post):
Sender: ntttcp -s -m 1,*,* -u -V -t 30 -b 1470 -N   # or any -b
Receiver: ntttcp -r -m 1,*,* -u -V -t 30 -N

The sender completes with output indicating the throughput, cycles/byte, time, etc.

However, the receiver is just stuck forever from the beginning (last message: "INFO: 1 threads created").

There is no issue with TCP (sync or no-sync), and no issue with UDP in sync mode.

Impact:
We have a use case with Windows as the ntttcp sender (and Linux as the ntttcp receiver), and only no-sync mode is supported between Windows and Linux.

Discrepancy in the throughput metric between the console output log and the XML output log

The throughput metric in the console output and the corresponding XML output do not match. This is seen in both TCP and UDP modes.

Attached below is a TCP run: tcp_send*.log and tcp_receive*.xml agree on throughput; however, the tcp_send*.xml throughput is totally different and lower, which is not correct. The cpu busy and cycles/byte values in tcp_send*.log and tcp_send*.xml appear identical.
Ubuntu_ToWinHost_tcp_1conn_65536b_run0.zip

Attached below is a UDP run with the same symptom as described above.
Ubuntu_ToWinHost_udp_1conn_1472b_run0.zip

ntttcp receiver: "failed to bind the socket to local address errcode = -1. errcode = 98" errors

Hi,

I am seeing "failed to bind the socket to local address: errcode = -1. errcode = 98" errors when trying to start as a receiver on RHEL 7.3. I have tried binding to all IPs as well as a single IP, and with both the default thread count and a single thread.

Is there something needed, or an option I need to enable, to get this to work?

using Default:

/opt/ntttcp-for-linux/src> ./ntttcp -r -V
NTTTCP for Linux 1.2.0

*** receiver role
threads: 16
cpu affinity: *
server address: 0.0.0.0
domain: IPv4
protocol: TCP
server port starting at: 5001
receiver socket buffer (bytes): 65536
test duration (sec): 60
show system tcp retransmit: no
verbose mode: enabled

10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5004
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5007
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5002
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5003
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5001
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5005
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5008
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5006
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5011
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5009
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5010
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5012
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5014
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5013
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5015
10:41:05 DBG : ntttcp server is listening on 0.0.0.0:5016
10:41:05 DBG : 17 threads created
10:41:05 DBG : failed to bind the socket to local address: 0.0.0.0 on socket: 19. errcode = -1. errcode = 98
10:41:05 ERR : cannot bind the socket on address: 0.0.0.0

using Single Thread:
/opt/ntttcp-for-linux/src> ./ntttcp -r -m 1 -V
NTTTCP for Linux 1.2.0

*** receiver role
threads: 1
cpu affinity: *
server address: 0.0.0.0
domain: IPv4
protocol: TCP
server port starting at: 5001
receiver socket buffer (bytes): 65536
test duration (sec): 60
show system tcp retransmit: no
verbose mode: enabled

10:42:35 DBG : 2 threads created
10:42:35 DBG : failed to bind the socket to local address: 0.0.0.0 on socket: 3. errcode = -1. errcode = 98
10:42:35 DBG : ntttcp server is listening on 0.0.0.0:5001
10:42:35 ERR : cannot bind the socket on address: 0.0.0.0

OS Details:
Red Hat Enterprise Linux Server 7.3 (Maipo)
uname -a
Linux 3.10.0-514.el7.x86_64 #1 SMP Wed Oct 19 11:24:13 EDT 2016 x86_64 x86_64 x86_64 GNU/Linux
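
For reference (an editorial note, not part of the original report): errno 98 on Linux is EADDRINUSE, "Address already in use", meaning some other process, or a lingering earlier instance, was already bound to one of the ports ntttcp needed; ss -tlnp will show which process holds the conflicting port. A minimal reproduction of the error code, independent of ntttcp:

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

/* Bind the same TCP port twice; on Linux the second bind() fails with
 * errno 98 (EADDRINUSE), the same code shown in the report above. */
int main(void)
{
	struct sockaddr_in a = { 0 };
	int s1 = socket(AF_INET, SOCK_STREAM, 0);
	int s2 = socket(AF_INET, SOCK_STREAM, 0);

	a.sin_family = AF_INET;
	a.sin_port = htons(5001);
	a.sin_addr.s_addr = htonl(INADDR_ANY);

	bind(s1, (struct sockaddr *)&a, sizeof(a));
	if (bind(s2, (struct sockaddr *)&a, sizeof(a)) < 0)
		printf("errcode = %d (%s)\n", errno, strerror(errno));
	return 0;
}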

TCP: Total bytes sent (n_send) and received (n_recv) are always mismatched, even with no dropped/error packets in the network stacks between the two endpoints

For a TCP protocol test, the total bytes sent (n_send) and received (n_recv) reported at the end of the test do not always match.
(UDP has the same issue, although UDP is an unreliable protocol and therefore subject to packet loss.)

I tested a TCP transfer from one VM to another remote VM, with 1 TCP stream (1 sender thread/connection -> 1 receiver thread/connection), with the sender and receiver buffer sizes both set to 1 B (-b 1), running for 60 seconds. The command lines are below:

ntttcp -s -m 1,*,10.231.111.24 -V --show-tcp-retrans --show-nic-packets eth1 -p 5021 -b 1 -n 1 -t 60
ntttcp -r -m 1,*,10.231.111.24 -V --show-tcp-retrans --show-nic-packets eth0 -p 5021 -b 1

Sender report:
05:03:54 INFO: test duration :60.00 seconds
05:03:54 INFO: total bytes :39287223
05:03:54 INFO: throughput :5.24Mbps

Receiver report:
05:03:51 INFO: test duration :60.00 seconds
05:03:51 INFO: total bytes :38750015
05:03:51 INFO: throughput :5.17Mbps

The sender always reports more total bytes than the receiver.
I ran the test several times, and every run showed a mismatch between the sender and receiver.

During the test run, I collected tcpdump traces on both the sender and receiver VMs, while also monitoring the network interface statistics (e.g. ip -s -d link show eth0). Neither reported any dropped or error packets.

I also collected pktmon traces on both the sender and receiver hosts, but the traces did not indicate any issues such as drops or errors.

Attached are the test run logs, tcpdump traces, and interface statistics (the host-side traces are too big):

Tcp_1stream_TotalBytesSendAndRecvdMismatch_1ByteBufferSize.zip

Note that the issue below can contribute to the mismatch, but it is not the main cause of it. I added extra logging in the error-return case to output any bytes received prior to the error return, and the mismatch existed even without hitting the error case.


One issue noticed:
In the n_send and n_recv functions in tcpstream.c, bytes can go missing from the calculation. For example, in the n_recv function below, the first else branch returns just the error code, but the error may occur after some data has already been received in the while loop, so the count of those received bytes is never returned. The n_send function has the same symptom.

int n_recv(int fd, char *buffer, size_t total)
{
	register ssize_t rtn;
	register size_t left = total;

	while (left > 0) {
		rtn = recv(fd, buffer, left, 0);
		if (rtn < 0) {
			if (errno == EINTR || errno == EAGAIN) {
				break;
			} else {
				printf("socket read error: %d\n", errno);
				return ERROR_NETWORK_READ;
			}
		} else if (rtn == 0)
			break;

		left -= rtn;
		buffer += rtn;
	}

	return total - left;
}
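
One way to address this (a sketch of the suggestion above, not the project's actual fix; the name n_recv_counted and the err_out parameter are illustrative): keep the running byte count on the error path and hand the error code back separately, so the caller can account for the partial data and still surface the failure.

#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>

/* Like n_recv, but returns the bytes received before any failure and
 * reports the error through *err_out instead of conflating the two. */
ssize_t n_recv_counted(int fd, char *buffer, size_t total, int *err_out)
{
	size_t left = total;

	*err_out = 0;
	while (left > 0) {
		ssize_t rtn = recv(fd, buffer, left, 0);
		if (rtn < 0) {
			if (errno == EINTR || errno == EAGAIN)
				break;
			printf("socket read error: %d\n", errno);
			*err_out = errno;  /* report, but keep the count */
			break;
		} else if (rtn == 0) {
			break;
		}
		left -= rtn;
		buffer += rtn;
	}
	return (ssize_t)(total - left);
}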
