
rushit-tool / rushit


This project forked from jsitnicki/rushit

12 stars · 12 forks · 1.66 MB

rushit is a scriptable network micro-benchmark tool for Linux

License: Apache License 2.0

Makefile 2.42% C 92.06% Lua 3.20% Shell 2.31%

rushit's People

Contributors

jsitnicki, qsn

Stargazers

(12 stargazers)

Watchers

(4 watchers)

rushit's Issues

Evaluate usefulness of how we measure CPU utilization

CPU utilization is an important metric in network benchmarks when we reach line rate and can no longer compare throughput, which is often the case with TCP stream tests. This was the case, for example, when comparing performance before and after the Meltdown/Spectre fixes.

rushit currently measures CPU usage with the getrusage() API, as iperf3 does. We collect samples in two ways:

1. For the whole process - once at the start and once at the end of the test run. These samples are printed in the output for the test run:

utime_start=0.019308
utime_end=0.568190
stime_start=0.000964
stime_end=7.634470

2. For each network thread, at regular intervals throughout the test run (interval set via the -I option). This gets reported together with the sample dump (-A option):

$ awk -F, '{ print $10, $11 }' samples.csv | column -t
utime     stime
0.081938  0.697616
0.158586  1.397109
0.235467  2.103314
0.305231  2.856134
0.379570  3.569675
0.447578  4.324109
0.526333  5.069617
0.586349  5.832823
0.655824  6.564788
0.717896  7.287646

Other tools measure it differently. For instance, netperf samples /proc/stat, while fio runs an idle thread and measures how much work it can get done. Cgroups also offer means of tracking used CPU time:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-cpuacct

We should evaluate whether the currently implemented CPU utilization accounting is useful to the user and, if needed, change it or provide alternative methods.

Whichever method we choose, we should also inform the user when the measurement should not be relied on, e.g. when power saving / CPU frequency scaling mechanisms are enabled.

Requested by @jbenc.

Switch to a single command with subcommands CLI

Instead of having separate binaries for each combination of L4 protocol and workload (at the moment tcp_stream, tcp_rr, udp_stream), have one command with subcommands or options for selecting the workload type and the transport protocol.

This has been suggested by @jbenc.

L4 proto selection from the script

Look into allowing users to select the socket type and protocol from the script. The idea is that the script would then be the only input needed to orchestrate the test run. Not sure yet if it makes sense.

/cc @jbenc

Script for using IPsec at socket level

Look into whether we can already make use of IPsec at the socket level, and provide an example script that does so. Otherwise, identify what still needs to be done to get there.

Report throughput for stream tests in various units of measure

To be user-friendly and on par with existing network benchmarks like netperf or iperf3, allow the user to choose the unit of measure for the reported throughput. Currently the only supported unit is Mbps (mega-, 1000^2, bits per second).

Being able to choose the unit also makes it easier for the user to determine whether a zero throughput reading is due to the measurement being rounded off during unit conversion or was really zero (e.g., in case of a setup/network problem).

The existing tools mentioned above support a subset of the following formats for reporting throughput/rate:

  • bits per second, power-of-10: bps, kbps, Mbps, Gbps
  • bytes per second, power-of-2: Bps, KiBps, MiBps, GiBps
  • bytes per second, power-of-10: Bps, kBps, MBps, GBps

Requested by @jtluka.

Distribute incoming connections fairly among TCP servers

TCP workloads use reuseport groups (the SO_REUSEPORT socket option) to balance incoming TCP connections among receiver threads.

The balancing is supposed to be fair, but initial tests with per-thread throughput reporting indicate that some threads are idling while others service more incoming connections than expected. See discussion in #1 (comment).

Report throughput / TPS individually for each thread / flow

Currently, at the end of a run, we report throughput / TPS aggregated across all threads and flows. This prevents users from reasoning about the distribution of throughput / TPS among threads/flows.

Calculate throughput / TPS for each flow individually and report it, in addition to printing the aggregated value as we do now.

Suggested by @olichtne.

Support for SCTP

Extend the workloads so that SCTP can be used as the transport protocol.

The first goal would be single-stream-per-socket operation, to match the TCP workloads.

Then see how we can fit in multi-streaming, and later multi-homing.

Simulating short-lived connections

Look into how we could facilitate simulating short-lived connections, where sockets are retired quickly (e.g. after one write) and new ones are opened (similar to HTTP benchmarks). The use case here is benchmarking connection tracking.
