
httperf's Issues

httperf: maximum number of open descriptors = 1024

Hi,

While using this tool I faced an issue, which is given below:

httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
httperf: maximum number of open descriptors = 1024

I have updated the open files limit to 65535, but I am still getting the same error. While searching online I found that the tool needs to be rebuilt from source, but I installed it using apt-get because the README.md was not helpful.

Is there another way to resolve this issue?
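For context: FD_SETSIZE is a compile-time constant of the select() interface, which is why raising the ulimit at runtime has no effect and the binary has to be rebuilt. A minimal sketch of the clamping behaviour (the variable names are illustrative, not httperf's actual code):

#include <stdio.h>
#include <sys/resource.h>
#include <sys/select.h>

int main (void)
{
  struct rlimit rl;
  getrlimit (RLIMIT_NOFILE, &rl);

  unsigned long max_fds = rl.rlim_cur;   /* e.g. 65535 after raising the limit */
  if (max_fds > FD_SETSIZE)
    max_fds = FD_SETSIZE;                /* an fd_set only tracks FD_SETSIZE descriptors */

  printf ("usable descriptors: %lu (FD_SETSIZE = %d)\n", max_fds, FD_SETSIZE);
  return 0;
}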

httperf: did not recognize arg

httperf --server=localhost --port=9081 --method=POST --uri=/bid/123 --add-header="Content-Type: application/json\n" --wsesslog=1000,100,http_perf.txt

httperf --client=0/1 --server=localhost --port=9081 --uri=/bid/123 --send-buffer=4096 --recv-buffer=16384 --add-header='Content-Type: application/json\n' --method=POST --wsesslog=1000,100.000,http_perf.txt
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
httperf: did not recognize arg '"ssp1-27009141-1485848975137","at":' in http_perf.txt

Contents of file http_perf.txt
contents={"id": "ssp1-27009141-1485848975137","at": 2,"imp": [{"id": "1","bidfloor": 13.521766169371157,"bidfloorcur": "RUB","banner": {"pos": 0,"h": 100,"w": 300},"secure": 0}],"site": {"id": "10930","domain": "warfiles.ru","ref": "http://warfiles.ru/show-142725-rossiya-vpervye-ispytala-noveyshuyu-aviabombu-drel-v-sirii.html","publisher": {"id": "9886"},"ext": {"place_id": 79170},"cat": ["IAB12"]},"device": {"ua": "Mozilla/5.0 (iPad; CPU OS 9_3_5 like Mac OS X) AppleWebKit/601.1.46 (KHTML, like Gecko) YaBrowser/16.11.0.2708.11 Mobile/13G36 Safari/601.1","ip": "84.234.53.206","make": "Apple","model": "iPad","os": "iOS","osv": "9.0","devicetype": 5},"user": {"id": "35e4f8a5-e897-4589-a075-bc2c8acd7e1e","buyeruid": "6331632281203562848","geo": {"type": 2,"city": "Moscow","region": "MD-BD","country": "Russia"}}}

I have tried escaping " with \, but it did not help either.
Please advise.
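A likely cause (an assumption based on the wsesslog syntax shown in the header-patch example further down this page, e.g. contents='Post data'): wsesslog splits lines on whitespace, so a contents= value with embedded spaces must be wrapped in single quotes. The session file line would then look like:

/bid/123 method=POST contents='{"id": "ssp1-27009141-1485848975137","at": 2, ...}'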

--hog option should set the address family before binding the socket to prevent issues

Using httperf with the --hog option can fail to bind sockets because the socket family is not set.

$ strace -tt httperf --hog
...
00:29:42.668802 bind(3, {sa_family=AF_UNSPEC, sa_data="PY\0\0\0\0\0\0\0\0\0\0\0\0"}, 16) = -1 EAFNOSUPPORT (Address family not supported by protocol)
00:29:42.668839 bind(3, {sa_family=AF_UNSPEC, sa_data="PZ\0\0\0\0\0\0\0\0\0\0\0\0"}, 16) = -1 EAFNOSUPPORT (Address family not supported by protocol)
00:29:42.668877 bind(3, {sa_family=AF_UNSPEC, sa_data="P[\0\0\0\0\0\0\0\0\0\0\0\0"}, 16) = -1 EAFNOSUPPORT (Address family not supported by protocol)
00:29:42.668914 bind(3, {sa_family=AF_UNSPEC, sa_data="P\\\0\0\0\0\0\0\0\0\0\0\0\0"}, 16) = -1 EAFNOSUPPORT (Address family not supported by protocol)
00:29:42.668950 bind(3, {sa_family=AF_UNSPEC, sa_data="P]\0\0\0\0\0\0\0\0\0\0\0\0"}, 16) = -1 EAFNOSUPPORT (Address family not supported by protocol)
00:29:42.668988 bind(3, {sa_family=AF_UNSPEC, sa_data="P^\0\0\0\0\0\0\0\0\0\0\0\0"}, 16) = -1 EAFNOSUPPORT (Address family not supported by protocol)
00:29:42.669025 bind(3, {sa_family=AF_UNSPEC, sa_data="P_\0\0\0\0\0\0\0\0\0\0\0\0"}, 16) = -1 EAFNOSUPPORT (Address family not supported by protocol)
...

$ uname -a 
Linux ... 3.0.0-8-generic #10-Ubuntu SMP Fri Aug 5 23:54:15 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux

The small patch attached sets the family to AF_INET.
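For reference, a minimal sketch of the kind of change the patch makes (illustrative names, not the actual httperf code): the local address handed to bind(2) must have its family initialized, otherwise it stays AF_UNSPEC (0), which is exactly what the strace output shows.

#include <string.h>
#include <sys/socket.h>
#include <netinet/in.h>

static int
bind_local_port (int sd, unsigned short port)
{
  struct sockaddr_in addr;

  memset (&addr, 0, sizeof (addr));
  addr.sin_family = AF_INET;                 /* previously left as AF_UNSPEC */
  addr.sin_addr.s_addr = htonl (INADDR_ANY);
  addr.sin_port = htons (port);
  return bind (sd, (struct sockaddr *) &addr, sizeof (addr));
}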

Original issue reported on code.google.com by [email protected] on 12 Aug 2011 at 10:33


does not wait for page to be produced.

I have a PHP site that takes 8-11 seconds to load (I know that's too long, don't bother telling me), but httperf seems to get the 200, close the connection, and move on, so I don't get the real performance of the page, just how fast the server can produce a 200.
I have looked over the options and don't see anything that applies.

Option to abort if there are errors



For some users, interpreting httperf's output can be difficult.

Since a large part of doing so is assuring that the test ran successfully
(i.e., there were no client-side errors), it strikes me that it would
simplify things considerably if we added a command-line flag that, when
present, would instruct httperf to abort with an appropriate complaint if:

a) total CPU is less than, say, 90%, or
b) any errors *other than* client-timeout are seen

This would make it significantly easier for operators to interpret the output.

Optionally, if the flag is present and the test run is successful, we could
suppress the CPU and error information (perhaps just leaving the total), to
make output cleaner.

Original issue reported on code.google.com by [email protected] on 16 Dec 2009 at 7:06

--server now expects a file of some sort

With the latest check-in, --server now expects a file of some sort to get the hostnames from. The file format is not documented, and this breaks automated tools (specifically Autobench).

I'd suggest reverting the behavior of the --server argument, and adding a new one (e.g. --server-list-file) for the new behavior.

Incompatible with OpenSSL 1.1.0

Hi,

httperf doesn't build with the new OpenSSL 1.1.0. This issue was reported in Debian here [1].

[1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=828343

The error message is:


f/httperf-0.9.0=. -fPIE -fstack-protector-strong -Wformat -Werror=format-security -DHAVE_SSL -c -o core.o core.c
core.c: In function ‘core_ssl_connect’:
core.c:803:18: warning: assignment discards ‘const’ qualifier from pointer target type [-Wdiscarded-qualifiers]
ssl_cipher = SSL_get_current_cipher (s->ssl);
^
core.c:809:14: error: dereferencing pointer to incomplete type ‘SSL_CIPHER {aka struct ssl_cipher_st}’
ssl_cipher->name, ssl_cipher->valid, ssl_cipher->id);
^~
Makefile:466: recipe for target 'core.o' failed

make[4]: *** [core.o] Error 1
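For reference, OpenSSL 1.1.0 made SSL_CIPHER an opaque type, so the direct field accesses have to become accessor calls. A minimal sketch of what the failing lines in core_ssl_connect could look like (s->ssl is taken from the error output above; the removed 'valid' field has no accessor and would simply be dropped):

const SSL_CIPHER *ssl_cipher = SSL_get_current_cipher (s->ssl);
if (ssl_cipher)
  fprintf (stderr, "cipher name=%s, id=%lu\n",
           SSL_CIPHER_get_name (ssl_cipher),
           (unsigned long) SSL_CIPHER_get_id (ssl_cipher));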

This issue will break httperf in Debian if it is not fixed by mid-November 2016.

Thanks a lot in advance.

Regards,

Eriberto

Wrong Results

Dear,

I have a topology where I run iPerf on my servers and httperf on the client, but the httperf results are wrong! What is the problem?

httperf --timeout=100 --client=0/1 --server=192.168.54.100 --port=80 --uri=/ --rate=10 --send-buffer=4096 --recv-buffer=16384 --ssl-protocol=auto --num-conns=100 --num-calls=1

Maximum connect burst length: 1

Total: connections 100 requests 100 replies 0 test-duration 109.923 s

Connection rate: 0.9 conn/s (1099.2 ms/conn, <=100 concurrent connections)
Connection time [ms]: min 0.0 avg 0.0 max 0.0 median 0.0 stddev 0.0
Connection time [ms]: connect 22.7
Connection length [replies/conn]: 0.000

Request rate: 0.9 req/s (1099.2 ms/req)
Request size [B]: 67.0

Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (21 samples)
Reply time [ms]: response 0.0 transfer 0.0
Reply size [B]: header 0.0 content 0.0 footer 0.0 (total 0.0)
Reply status: 1xx=0 2xx=0 3xx=0 4xx=0 5xx=0

CPU time [s]: user 46.50 system 63.26 (user 42.3% system 57.5% total 99.8%)
Net I/O: 0.1 KB/s (0.0*10^6 bps)

Errors: total 100 client-timo 100 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Throws invalid server when sending request to wailua or any other server

Hi All,
I followed the steps to install httperf on my local machine and did not see any errors, but when I make a request as seen below I keep getting "invalid server address". Any thoughts on how to fix this?

httperf --server wailua --port 6800
httperf --client=0/1 --server=wailua --port=6800 --uri=/ --send-buffer=4096 --recv-buffer=16384 --num-conns=1 --num-calls=1
httperf.core_addr_intern: invalid server address wailua

Output when I did make install in the httperf directory:

make install
Making install in src
Making install in gen
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
Making install in lib
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
Making install in stat
make[3]: Nothing to be done for `install-exec-am'.
make[3]: Nothing to be done for `install-data-am'.
/Users/User1/DevProjec/httpperf/httperf/install-sh -c -d '/usr/local/bin'
/bin/sh ../libtool --mode=install /usr/bin/install -c httperf '/usr/local/bin'
libtool: install: /usr/bin/install -c httperf /usr/local/bin/httperf
make[3]: Nothing to be done for `install-data-am'.
Making install in man
make[2]: Nothing to be done for `install-exec-am'.
/Users/User1/DevProjec/httpperf/httperf/install-sh -c -d '/usr/local/share/man/man1'
/usr/bin/install -c -m 644 /Users/User1/DevProjec/httpperf/httperf/man/httperf.1 /Users/User1/DevProjec/httpperf/httperf/man/idleconn.1 '/usr/local/share/man/man1'
make[2]: Nothing to be done for `install-exec-am'.
make[2]: Nothing to be done for `install-data-am'.

Typo in manpage

What steps will reproduce the problem?
1. MANWIDTH=70 man httperf | awk '{ if(FNR == 13 || FNR == 46){ print $NL } }'

What is the expected output? What do you see instead?

Expected:
       calls N] [--num-conns N] [--period [d|u|e]T1[,T2]] [--port N]
       httperf --hog --server www --num-conns 100 --ra 10 --timeout 5


Actual:
       calls N] [--num-conns N] [--period [d|u|e]T1[,T2]] [--port N]
       httperf --hog --server www --num-conn 100 --ra 10 --timeout 5


What version of the product are you using? On what operating system?

0.9.0 on Archlinux

Please provide any additional information below.

This might be a way to fix it:
sed -i 's/conn /conns /'

Original issue reported on code.google.com by [email protected] on 5 Jun 2011 at 12:55

httperf segfaults on Mac OS X 10.7.5

What steps will reproduce the problem?
1. Run httperf as follows: httperf --timeout=10 --server=*host* --port=80 --wlog=y,requests_httperf_Hindi.txt --uri=/ --rate=200 --num-conns=2000 --num-calls=1 --client=1/3
2. It outputs: Segmentation fault: 11

What is the expected output? What do you see instead?
I expect the test to go on but instead it segfaults.
However, if I lower the rate to around 100, it does not segfault.



What version of the product are you using? On what operating system?
v0.9

Please provide any additional information below.
ulimit on the host
ulimit -n
10000
uname -a
Darwin degreethem-lm 11.4.2 Darwin Kernel Version 11.4.2: Thu Aug 23 16:25:48 
PDT 2012; root:xnu-1699.32.7~1/RELEASE_X86_64 x86_64


Original issue reported on code.google.com by [email protected] on 19 Feb 2013 at 1:02

Missing tags on previous versions

Hello,

This software has been released (Arch Linux is at 0.9), but the current git repository lacks tags for the past released versions.

Tags would help with building a released version from git.

As a side question, where can we find tarballs nowadays?

endless loop on EV_HOSTNAME_LOOKUP_START

What steps will reproduce the problem?
1. Use httperf from a cloud of Amazon EC2 instances on Debian, with autobench.

What is the expected output? What do you see instead?

Each time I run my autobench tests, at least one instance of httperf gets stuck in an endless loop. With debug enabled, here is the result: every 5 seconds it fires these two lines:

event_signal: EV_HOSTNAME_LOOKUP_START (obj=(nil),arg=2b80e9e8)
timer_schedule: t=0x81ca5f8, delay=5s, subject=0

thanks for your help,
sam

Original issue reported on code.google.com by [email protected] on 21 Jun 2012 at 12:21

Possibility to choose TLS library

What?

  • httperf could be useful to test TLS performance
  • Currently httperf uses OpenSSL. OpenSSL and BoringSSL have very similar APIs, so it may be possible to add a compile-time switch that allows building against either library

Why?

  • BoringSSL offers some additional, different functionality. It recently added support for two post-quantum key exchange algorithms, so httperf could be used to measure the differences between classical and post-quantum key exchange algorithms
  • PoC of BoringSSL integration + some additional features (like possibility to specify key exchange algorithm to use in TLSv1.3) can be found here:
    https://github.com/post-quantum-cryptography/httperf/tree/pq

Questions:

  • Is the work in the PoC interesting? Does it make sense to upstream the changes?
  • Is httperf team interested in maintaining support for 2 TLS libraries?

Error while running "autoreconf -i"

I have run into the following issue when running autoreconf -i in the source directory.

aclocal: warning: couldn't open directory 'm4': No such file or directory
configure.ac:21: error: possibly undefined macro: AC_PROG_LIBTOOL
      If this token and others are legitimate, please use m4_pattern_allow.
      See the Autoconf documentation.
autoreconf: /usr/bin/autoconf failed with exit status: 1

Operating system: Linux Mint 17.3
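A common fix (an assumption, since the environment isn't fully described here): the AC_PROG_LIBTOOL macro is provided by the libtool package, so the error usually means libtool isn't installed; the m4 warning itself is harmless. For example:

$ sudo apt-get install libtool
$ autoreconf -i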

new releases

Where are new versions being released? I didn't find any download package on sf.net.

Error: Use of uninitialized value in concatenation

I installed httperf on my MBP running Mac OSX 10.6.4
Installed httperf and autobench. When I executed the command:

autobench --single_host --host1 google.com --uri1 / --low_rate 5 --high_rate 20 --rate_step 10 --num_call 10

Below is the error result:

Zero replies received, test invalid: rate 15
Use of uninitialized value in concatenation (.) or string at /usr/local/bin/autobench line 308.
Use of uninitialized value in concatenation (.) or string at /usr/local/bin/autobench line 308.
Use of uninitialized value in concatenation (.) or string at /usr/local/bin/autobench line 308.
Use of uninitialized value in concatenation (.) or string at /usr/local/bin/autobench line 308.
Use of uninitialized value in concatenation (.) or string at /usr/local/bin/autobench line 308.
Use of uninitialized value in concatenation (.) or string at /usr/local/bin/autobench line 308.
Use of uninitialized value in concatenation (.) or string at /usr/local/bin/autobench line 308.
Use of uninitialized value in concatenation (.) or string at /usr/local/bin/autobench line 308.

Original issue reported on code.google.com by [email protected] on 20 Dec 2010 at 4:02

repeating content in --print-reply

If you specify --print-reply and the response body is large, the headers
will print repeatedly as the content streams out.

I think this is because print_reply() is called every time send_raw_data fires.

E.g.,

SH0:PUT /test.jpg HTTP/1.1
SH0:Host: www.example.net
SH0:Content-Type: image/jpeg
SH0:Content-Length: 55672
SH0:User-Agent: httperf/0.9.0
SH0:
SS0: header 148 content 55672
SH0:PUT /test.jpg HTTP/1.1
SH0:Host: www.example.net
SH0:Content-Type: image/jpeg
SH0:Content-Length: 55672
SH0:User-Agent: httperf/0.9.0
SH0:
SS0: header 148 content 51476
SH0:PUT /test.jpg HTTP/1.1
SH0:Host: www.example.net
SH0:Content-Type: image/jpeg
SH0:Content-Length: 55672
SH0:User-Agent: httperf/0.9.0
SH0:

Original issue reported on code.google.com by [email protected] on 16 Dec 2009 at 7:13

Connection: close not honoured

httperf does not recognize "Connection: close" headers; it assumes that Web servers claiming to be "HTTP/1.1" support persistent connections. Likewise, it assumes that servers claiming to be "HTTP/1.0" do not support persistent connections.

From RFC 2616, section 14.10:

HTTP/1.1 defines the "close" connection option for the sender to signal
that the connection will be closed after completion of the response. For
example, Connection: close in either the request or the response header
fields indicates that the connection SHOULD NOT be considered `persistent'
(section 8.1) after the current request/response is complete. HTTP/1.1
applications that do not support persistent connections MUST include the
"close" connection option in every message.

Original issue reported on code.google.com by [email protected] on 16 Dec 2009 at 7:10

Large POST (>10000 chars) gets truncated

We've noticed that feeding httperf a POST containing a huge number of characters (well over 10000) produces 4xx and 5xx Reply Status counts, with the contents getting truncated and spurious GETs being originated.

Upon investigation we tracked the issue to the POST size.

A similar issue was reported in http://www.hpl.hp.com/hosted/linux/mail-archives/httperf/2011-May/000731.html in 2011.

To fix the issue we've recompiled httperf with the following patch:
--- httperf-master-pristine/src/gen/wsesslog.c 2015-07-16 15:44:02.000000000 +0100
+++ httperf-master/src/gen/wsesslog.c 2015-09-01 17:24:49.388360243 +0100
@@ -388,11 +388,11 @@
   FILE *fp;
   int lineno, i, reqnum;
   Sess_Private_Data *sptr;
-  char line[10000];	/* some uri's get pretty long */
-  char uri[10000];	/* some uri's get pretty long */
+  char line[500000];	/* some uri's get pretty long */
+  char uri[500000];	/* some uri's get pretty long */
   char method_str[1000];
-  char this_arg[10000];
-  char contents[10000];
+  char this_arg[500000];
+  char contents[500000];
   double think_time;
   int bytes_read;
   REQ *reqptr;

httperf hangs if num-calls not divisible by burst-length

What steps will reproduce the problem?
1. Run `httperf --server=google.com --uri=/ --num-calls=3 --burst-length=2 -v`

What is the expected output? What do you see instead?
A very quick exit. The actual result is that httperf hangs.

What version of the product are you using? On what operating system?
Tested on 0.9.0

Original issue reported on code.google.com by [email protected] on 18 Feb 2014 at 3:58

Extending SSL support

Currently httperf v0.9.0 does not support SSL client certificate
authentication, server certificate verification, or choosing the SSL
protocol version.  This patch adds the following:

1) Client SSL certificate authentication
   - add --ssl-certificate option (required argument file name) to specify
the certificate
   - add --ssl-key option (required argument file name) to specify the key
   - loads certificate and key into the SSL context and performs a
consistency check
2) Server certificate verification
   - add --ssl-verify option (optional arguments "no" and "yes") to control
verification (off by default)
   - add --ssl-ca-file and --ssl-ca-path options (required arguments file
and path names) to set custom certificate authority file and path
   - load system default certificate path (e.g., /etc/ssl/certs)
3) Choose SSL protocol version
   - add --ssl-protocol option (required argument of "auto", "SSLv2",
"SSLv3", or "TLSv1") to choose protocol; "auto" (default) chooses
SSLv23_client_method

The patch modifies httperf.h and httperf.c, adding the appropriate option
processing and OpenSSL code, as well as reporting these options in help
text (--help) and the command line summary when running the program.  It
has been tested on MacOS X (10.4), Ubuntu 9.04, and Solaris 10 x86 using
gcc3.  It is known to operate properly against Apache 2.2/mod_ssl, pound,
and Ruby WEBrick servers.

Note: this changes the default SSL protocol from SSLv3_client_method to
SSLv23_client_method.  Other new options (e.g. --ssl-verify) are off by
default.

Original issue reported on code.google.com by [email protected] on 6 May 2010 at 10:41


build error

[root@8c0a-0002 build]# /alidata01/httperf/configure.ac
/alidata01/httperf/configure.ac: line 4: syntax error near unexpected token `2.60'
/alidata01/httperf/configure.ac: line 4: `AC_PREREQ(2.60)'

--no-host-hdr option doesn't work with --add-header

For my testing, I need to set my own Host header, so I disable the
auto-generated Host header with --no-host-hdr and add in my own Host header
among other additional headers I need with the --add-header option.

With that, the auto-generated Host header is gone, but an empty line "\r\n" is left behind (see the output with the empty line below), which together with another "\r\n" signals the end of the header section in HTTP. As a result, my own additional headers are treated as the body of the HTTP request and are thus ignored.

SH0:GET /folder/hello.txt HTTP/1.1
SH0:User-Agent: httperf/0.9.0
SH0:
SH0:My-Header: some text
SH0:Host: www.mydomain.com
SH0:

Original issue reported on code.google.com by [email protected] on 16 Dec 2009 at 7:08

Install instructions in the README are incorrect

The README says:

    $ mkdir build
    $ cd build
    $ SRCDIR/configure
    $ make
    $ make install

But there is no script called configure.
There is a file called configure.ac, which says at the top to use autoconf... somehow.

Could the instructions please be updated so that someone with no experience can easily compile the program without having to look up how to use autoconf, etc.?
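For what it's worth, the usual autotools bootstrap (an assumption based on the autoreconf usage reported elsewhere on this page) would be:

$ autoreconf -i          # generates the configure script from configure.ac
$ mkdir build
$ cd build
$ ../configure
$ make
$ make install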

Please make a release

It is much easier for package/port maintainers if there are officially released versions. Could you please tag a 1.0 (or whatever version you prefer) release?

*** buffer overflow detected ***: httperf terminated

I tried the following test, but I was getting a lot of fd-unavail errors. 

httperf --server myIP --port 80 --num-conns 90000 --rate 1000 --method GET --uri myURL


To fix that, I increased FD_SETSIZE in /etc/include/bits/typesizes.h to 65535.


Re-running the same test, I get a buffer overflow detected error. 

*** buffer overflow detected ***: httperf terminated
======= Backtrace: =========
/lib/x86_64-linux-gnu/libc.so.6(__fortify_fail+0x37)[0x7f9228bf3807]
/lib/x86_64-linux-gnu/libc.so.6(+0x109700)[0x7f9228bf2700]
/lib/x86_64-linux-gnu/libc.so.6(+0x10a7be)[0x7f9228bf37be]
httperf[0x404054]
httperf[0x404e9f]
httperf[0x406943]
httperf[0x406bc1]
httperf[0x40638f]
httperf[0x405057]
httperf[0x40285e]
/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xed)[0x7f9228b0a76d]
httperf[0x4038f1]
======= Memory map: ========
00400000-00410000 r-xp 00000000 08:12 6424320                            /usr/local/bin/httperf
0060f000-00610000 r--p 0000f000 08:12 6424320                            /usr/local/bin/httperf
00610000-00611000 rw-p 00010000 08:12 6424320                            /usr/local/bin/httperf
00611000-006a4000 rw-p 00000000 00:00 0
024bd000-02628000 rw-p 00000000 00:00 0                                  [heap]
7f92284b8000-7f92284cd000 r-xp 00000000 08:12 1969821                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7f92284cd000-7f92286cc000 ---p 00015000 08:12 1969821                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7f92286cc000-7f92286cd000 r--p 00014000 08:12 1969821                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7f92286cd000-7f92286ce000 rw-p 00015000 08:12 1969821                    /lib/x86_64-linux-gnu/libgcc_s.so.1
7f92286ce000-7f92286e4000 r-xp 00000000 08:12 1969911                    /lib/x86_64-linux-gnu/libz.so.1.2.3.4
7f92286e4000-7f92288e3000 ---p 00016000 08:12 1969911                    /lib/x86_64-linux-gnu/libz.so.1.2.3.4
7f92288e3000-7f92288e4000 r--p 00015000 08:12 1969911                    /lib/x86_64-linux-gnu/libz.so.1.2.3.4
7f92288e4000-7f92288e5000 rw-p 00016000 08:12 1969911                    /lib/x86_64-linux-gnu/libz.so.1.2.3.4
7f92288e5000-7f92288e7000 r-xp 00000000 08:12 1969813                    /lib/x86_64-linux-gnu/libdl-2.15.so
7f92288e7000-7f9228ae7000 ---p 00002000 08:12 1969813                    /lib/x86_64-linux-gnu/libdl-2.15.so
7f9228ae7000-7f9228ae8000 r--p 00002000 08:12 1969813                    /lib/x86_64-linux-gnu/libdl-2.15.so
7f9228ae8000-7f9228ae9000 rw-p 00003000 08:12 1969813                    /lib/x86_64-linux-gnu/libdl-2.15.so
7f9228ae9000-7f9228c9e000 r-xp 00000000 08:12 1969800                    /lib/x86_64-linux-gnu/libc-2.15.so
7f9228c9e000-7f9228e9d000 ---p 001b5000 08:12 1969800                    /lib/x86_64-linux-gnu/libc-2.15.so
7f9228e9d000-7f9228ea1000 r--p 001b4000 08:12 1969800                    /lib/x86_64-linux-gnu/libc-2.15.so
7f9228ea1000-7f9228ea3000 rw-p 001b8000 08:12 1969800                    /lib/x86_64-linux-gnu/libc-2.15.so
7f9228ea3000-7f9228ea8000 rw-p 00000000 00:00 0
7f9228ea8000-7f9228fa3000 r-xp 00000000 08:12 1969832                    /lib/x86_64-linux-gnu/libm-2.15.so
7f9228fa3000-7f92291a2000 ---p 000fb000 08:12 1969832                    /lib/x86_64-linux-gnu/libm-2.15.so
7f92291a2000-7f92291a3000 r--p 000fa000 08:12 1969832                    /lib/x86_64-linux-gnu/libm-2.15.so
7f92291a3000-7f92291a4000 rw-p 000fb000 08:12 1969832                    /lib/x86_64-linux-gnu/libm-2.15.so
7f92291a4000-7f9229355000 r-xp 00000000 08:12 1966085                    /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
7f9229355000-7f9229555000 ---p 001b1000 08:12 1966085                    /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
7f9229555000-7f9229570000 r--p 001b1000 08:12 1966085                    /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
7f9229570000-7f922957b000 rw-p 001cc000 08:12 1966085                    /lib/x86_64-linux-gnu/libcrypto.so.1.0.0
7f922957b000-7f922957f000 rw-p 00000000 00:00 0
7f922957f000-7f92295d3000 r-xp 00000000 08:12 1966092                    /lib/x86_64-linux-gnu/libssl.so.1.0.0
7f92295d3000-7f92297d3000 ---p 00054000 08:12 1966092                    /lib/x86_64-linux-gnu/libssl.so.1.0.0
7f92297d3000-7f92297d6000 r--p 00054000 08:12 1966092                    /lib/x86_64-linux-gnu/libssl.so.1.0.0
7f92297d6000-7f92297dc000 rw-p 00057000 08:12 1966092                    /lib/x86_64-linux-gnu/libssl.so.1.0.0
7f92297dc000-7f92297dd000 rw-p 00000000 00:00 0
7f92297dd000-7f92297ff000 r-xp 00000000 08:12 1969780                    /lib/x86_64-linux-gnu/ld-2.15.so
7f92299e7000-7f92299ec000 rw-p 00000000 00:00 0
7f92299fb000-7f92299ff000 rw-p 00000000 00:00 0
7f92299ff000-7f9229a00000 r--p 00022000 08:12 1969780                    /lib/x86_64-linux-gnu/ld-2.15.so
7f9229a00000-7f9229a02000 rw-p 00023000 08:12 1969780                    /lib/x86_64-linux-gnu/ld-2.15.so
7fff70a79000-7fff70a9a000 rw-p 00000000 00:00 0                          [stack]
7fff70bff000-7fff70c00000 r-xp 00000000 00:00 0                          [vdso]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0                  [vsyscall]

I'm using httperf 0.9.0 downloaded from this site on Ubuntu 12.04 LTS, 64-bit.


If I set FD_SETSIZE to 1024, the error disappears (but I again get lots of fd-unavail errors).

To set the FD_SETSIZE I followed these guides, but without any improvement.

http://aunixlover.blogspot.com/2012/02/how-to-configure-clients-and-servers.html

http://www.lkj.net/2011/07/increasing-httperf-maximum-number-of-open-descriptors-from-1024-to-65535/#comment-3888

Original issue reported on code.google.com by [email protected] on 26 Aug 2013 at 2:08

Add headers to sessions in wsesslog

The wsesslog file format has no option to add HTTP headers to individual requests.
Attached is a patch that allows up to 16 kB of arbitrary headers to be added.

Example usage: 
# session 1 definition (this is a comment)
/foo.html think=2.0 headers='Cookie: CO=blahblahblah;'
      /pict1.gif
      /pict2.gif
/foo2.html method=POST contents='Post data' headers='Cookie: CO=foo;'
      /pict3.gif
      /pict4.gif

Based on the patch from 
http://www.overset.com/2008/03/27/load-test-ajax-applications-with-httperf/ but 
with modest improvements.

Original issue reported on code.google.com by [email protected] on 19 Jul 2011 at 9:52


Requests in Keep-Alive connections are not ACKed correctly

I've searched for existing keep-alive issues but found none, so this might still be unfixed since httperf 0.9.1, the version my Linux distribution ships...

Please have a look at my comment here:
rwf2/Rocket#2062 (comment)

To summarize: in httperf, how and when received data segments in a keep-alive session are ACKed depends on the receive-buffer size, and mostly this is buggy and just waits around. Please see the above comment for examples and an explanation.

Issues with TLS

There are two big issues with using TLS.

  1. buf in do_recv() needs to be at least 16 kB, since TLS records can be up to 16 kB; otherwise the end of a reply is only delivered when the connection closes (many seconds later), which results in extremely bad performance.

  2. with TLS 1.1 or 1.2, the first request sent to a server is complete garbage / random bits. Further requests to the same server work. Not sure precisely what the issue is, but with a more "atomic" SSL_connect the problem is gone, i.e. the following replacement code for the connect fixes the issue on my system:

      while ((ssl_err = SSL_connect(s->ssl)) == -1) {
              int reason = SSL_get_error(s->ssl, ssl_err);
              if (reason != SSL_ERROR_WANT_READ &&
                  reason != SSL_ERROR_WANT_WRITE) {
                      fprintf(stderr,
                          "%s: failed to connect to SSL server (err=%d, reason=%d)\n",
                          prog_name, ssl_err, reason);
                      ERR_print_errors_fp(stderr);
                      exit(-1);
              }
      }
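For the first issue, the corresponding change would be a one-line enlargement of the buffer (a sketch; the actual declaration inside do_recv() may differ):

      char buf[16 * 1024];    /* TLS records can be up to 16 kB */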

Using --wlog file causes "httperf.parse_status_line: invalid status line" error

What steps will reproduce the problem?
1. Create a wlog uri file with a forward slash in it
2. Issue a single request against yahoo.com (as an example only):
httperf --server=www.yahoo.com --wlog=n,uri.txt --rate=1 --num-conns=1 --num-calls=1 --print-request --print-reply=header

What is the expected output? What do you see instead?
I would expect to see results, but instead I see the above error

What version of the product are you using? On what operating system?
0.9.0, ubuntu 10

Please provide any additional information below.
When issuing the above command, the request headers have the "HTTP/1.1" on a 
second line as opposed to the first line.  The web server returns no status 
lines when this happens.

If you issue the same test with the --uri parameter instead of --wlog:
httperf --server=www.yahoo.com --uri=/ --num-conns=1 --num-calls=1 --rate=1 --print-request --print-reply=header

then the test succeeds as expected. Here is a comparison of the request headers between the two requests:

SH0:GET /
SH0: HTTP/1.1
SH0:User-Agent: httperf/0.9.0
SH0:Host: www.yahoo.com
SH0:
SS0: header 67 content 0


SH0:GET / HTTP/1.1
SH0:User-Agent: httperf/0.9.0
SH0:Host: www.yahoo.com
SH0:
SS0: header 66 content 0

and the difference in the reply headers, the first causing the error:

RH0:<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.01//EN"
RH0:   "http://www.w3.org/TR/html4/strict.dtd">
RH0:<html lang="en-US" class="y-fp-bg y-fp-pg-grad  bkt701">


RH0:HTTP/1.1 200 OK
RH0:Date: Thu, 13 Jan 2011 20:12:28 GMT
...
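A likely explanation (an assumption based on the request dump above, where the newline lands inside the URI): --wlog expects the URIs in the file to be separated by NUL (ASCII 0) characters rather than newlines, so a file with one URI per line embeds the '\n' in the request line. A correctly terminated single-URI file can be generated with, e.g.:

printf '/\0' > uri.txt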

Original issue reported on code.google.com by [email protected] on 13 Jan 2011 at 8:29

enormous number of CLOSE_WAIT

-- What steps will reproduce the problem?
run the following command:
httperf --hog --rate=500 --num-conns=200000 --timeout=5 --uri=/img/default.jpeg --num-calls=10 --server=svr-web

-- What is the expected output? What do you see instead?

The test should finish within about 400 seconds (200000 / 500), but it gets stuck. After a while, I found that there is no network traffic from client to server, and netstat shows more than 1000 sockets in CLOSE_WAIT state. Note that if I use --num-conns with a smaller number, e.g. 20000, there is no problem at all. I guess this is a problem in how httperf handles the socket close operation?

-- What version of the product are you using? On what operating system?

Ubuntu 10.04 (both server and client) on a Q8400 with 4 GB of memory. Note that the server is also running a desktop (GNOME) version of Ubuntu, not the server edition. Also, I have tuned the file descriptor limit on the client and compiled httperf on it.

Original issue reported on code.google.com by [email protected] on 12 Aug 2010 at 3:02

--num-calls not recognized with --wsesslog

When I specify --num-calls, it does not get used in conjunction with --wsesslog:

root@host:web# httperf --hog --num-calls 10000 --port 3000 --server localhost --wsesslog=100000,0,file
httperf --hog --client=0/1 --server=localhost --port=3000 --uri=/ --send-buffer=4096 --recv-buffer=16384 --wsesslog=100000,0.000,file

--add-header does not work

[root@8c0a-0002 ~]# httperf --method=GET --timeout=5 --client=0/1 --server=xxxx.com --port=80 --uri=/pad/v1.0/home/main --add-header="Authorization: d268bb2e-ba5b-446a-bdd8-e065e40a5175\n" --debug=100 --print-request=header --print-reply=body --rate=5 --num-conns=1 --num-calls=1
httperf: sorry, need to recompile with -DDEBUG on...
httperf --print-reply=body --print-request=header --timeout=5 --client=0/1 --server=xxxx.com --port=80 --uri=/pad/v1.0/home/main --rate=5 --send-buffer=4096 --recv-buffer=16384 --ssl-protocol=auto --add-header='Authorization: d268bb2e-ba5b-446a-bdd8-e065e40a5175\n' --method=GET --num-conns=1 --num-calls=1
SH0:GET /pad/v1.0/home/main HTTP/1.1
SH0:User-Agent: httperf/0.9.1
SH0:Host: xxx.com
SH0:Authorization: d268bb2e-ba5b-446a-bdd8-e065e40a5175
SH0:
SS0: header 149 content 0
RB0:{"code":null,"business_exception":false,"http":{"method":"get","request_id":"1593857640775","url":"http://xxx.com/pad/v1.0/home/main"},"http_status":"INTERNAL_SERVER_ERROR","message":null,"status":null}
RS0: header 167 content 220 footer 2
Maximum connect burst length: 0

Total: connections 1 requests 1 replies 1 test-duration 0.099 s

Connection rate: 10.1 conn/s (99.3 ms/conn, <=1 concurrent connections)
Connection time [ms]: min 99.3 avg 99.3 max 99.3 median 99.5 stddev 0.0
Connection time [ms]: connect 48.7
Connection length [replies/conn]: 1.000

Request rate: 10.1 req/s (99.3 ms/req)
Request size [B]: 149.0

Reply rate [replies/s]: min 0.0 avg 0.0 max 0.0 stddev 0.0 (0 samples)
Reply time [ms]: response 50.7 transfer 0.0
Reply size [B]: header 167.0 content 220.0 footer 2.0 (total 389.0)
Reply status: 1xx=0 2xx=0 3xx=0 4xx=0 5xx=1

CPU time [s]: user 0.06 system 0.04 (user 61.2% system 36.1% total 97.3%)
Net I/O: 5.3 KB/s (0.0*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0

Contribution

My name is Christopher Torres; I'm an undergraduate student at the University of Puerto Rico at Bayamón. Currently I'm involved in an undergraduate research initiative led by Dr. Juan Sola Sloan (http://www.uprb.edu/profesor/jsola/) working on httperf. Our current goal is to provide functionality that allows dumping test results into a file for further performance analysis. Another fellow student is working on a tool that uses this file as input and generates some cool graphs on a webpage. How can we contribute our findings to the whole httperf project?

Thank you for your time,
Christopher Torres

xerrot at gmail dot com

Original issue reported on code.google.com by [email protected] on 7 Apr 2010 at 3:39

httperf: failed to increase number of open file limit: Invalid argument

Compiled master on OSX (10.11.2).

Had a few warnings when compiling.

All attempts at executing httperf result in the error: httperf: failed to increase number of open file limit: Invalid argument.

The source code suggests the following operation is causing the error:

if (setrlimit (RLIMIT_NOFILE, &rlimit) < 0)

$ uname -a
Darwin 15.2.0 Darwin Kernel Version 15.2.0: Fri Nov 13 19:56:56 PST 2015; root:xnu-3248.20.55~2/RELEASE_X86_64 x86_64

$ ulimit -n
256
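A plausible cause (an assumption, not confirmed in this report): on OS X, setrlimit(2) rejects an RLIMIT_NOFILE soft limit above OPEN_MAX with EINVAL, so the raise has to be clamped first. A minimal sketch:

#include <stdio.h>
#include <limits.h>        /* OPEN_MAX */
#include <sys/resource.h>

int main (void)
{
  struct rlimit rl;
  getrlimit (RLIMIT_NOFILE, &rl);

  /* OS X refuses soft limits above OPEN_MAX, so clamp before raising: */
  rl.rlim_cur = (rl.rlim_max < OPEN_MAX) ? rl.rlim_max : OPEN_MAX;
  if (setrlimit (RLIMIT_NOFILE, &rl) < 0)
    perror ("setrlimit");
  return 0;
}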

error: ‘SSL_OP_NO_TLSv1_3’ undeclared (first use in this function)

Compiling produces this error; how can I solve it?

make all-recursive
make[1]: Entering directory `/vagrant/packages/httperf-master'
Making all in src
make[2]: Entering directory `/vagrant/packages/httperf-master/src'
Making all in gen
make[3]: Entering directory `/vagrant/packages/httperf-master/src/gen'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/vagrant/packages/httperf-master/src/gen'
Making all in lib
make[3]: Entering directory `/vagrant/packages/httperf-master/src/lib'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/vagrant/packages/httperf-master/src/lib'
Making all in stat
make[3]: Entering directory `/vagrant/packages/httperf-master/src/stat'
make[3]: Nothing to be done for `all'.
make[3]: Leaving directory `/vagrant/packages/httperf-master/src/stat'
make[3]: Entering directory `/vagrant/packages/httperf-master/src'
gcc -std=gnu99 -DHAVE_CONFIG_H -I. -I.. -I. -I./gen -I./lib -I./stat -g -O2 -DHAVE_SSL -MT httperf.o -MD -MP -MF .deps/httperf.Tpo -c -o httperf.o httperf.c
httperf.c: In function ‘main’:
httperf.c:1032:126: error: ‘SSL_OP_NO_TLSv1_3’ undeclared (first use in this function)
 SSL_CTX_set_options(ssl_ctx, SSL_OP_NO_SSLv3 | SSL_OP_NO_TLSv1 | SSL_OP_NO_TLSv1_1 | SSL_OP_NO_TLSv1_2 | SSL_OP_NO_TLSv1_3);
httperf.c:1032:126: note: each undeclared identifier is reported only once for each function it appears in
make[3]: *** [httperf.o] Error 1
make[3]: Leaving directory `/vagrant/packages/httperf-master/src'
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory `/vagrant/packages/httperf-master/src'
make[1]: *** [all-recursive] Error 1
make[1]: Leaving directory `/vagrant/packages/httperf-master'
make: *** [all] Error 2
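SSL_OP_NO_TLSv1_3 only exists since OpenSSL 1.1.1, so this build is against an older OpenSSL. A common workaround (a sketch, not an official fix) is to define the missing flag as a no-op before it is used:

#ifndef SSL_OP_NO_TLSv1_3
#define SSL_OP_NO_TLSv1_3 0    /* this OpenSSL predates TLS 1.3 anyway */
#endif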

Default server and URI

I don't think it makes sense to assume that the server is "localhost" and
the URI is "/" if they're not supplied on the command line; perf testing
the same box as you run the clients on is bad practice.

Instead, these arguments (or a suitable logfile generator) should be
required, and httperf should fail if it can't find an explicit
specification of the test URI.

Original issue reported on code.google.com by [email protected] on 16 Dec 2009 at 7:15

User-Agent is static

httperf always emits the httperf User-Agent, even when one is set explicitly in the arguments. This makes it difficult to test when the UA is important to the server.

Two possible fixes:

1) make it possible to explicitly override the UA on the command line. It
would probably also be necessary to allow logs (e.g., wlog) to override it.

2) omit the UA, unless it's specified.

#2 is probably easier, but may cause problems if people don't remember to
set a UA.

Original issue reported on code.google.com by [email protected] on 16 Dec 2009 at 7:02
