HttPerf


First, compile httperf for your platform. It works just fine under both 32-bit and 64-bit. Use the file command to verify the binary matches your platform.

<geshi lang="bash">
$ file /home/ec2/bin/httperf
/home/ec2/bin/httperf: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.8, dynamically linked (uses shared libs), not stripped
$ file /usr/local/bin/httperf
/usr/local/bin/httperf: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), for GNU/Linux 2.6.8, dynamically linked (uses shared libs), not stripped
</geshi>

You'll want SSL compiled in. You can either check whether the '--ssl' option shows up in the help output, or better yet, check whether the executable links against the SSL libraries. On Ubuntu I had to make sure I had 'libc6', 'libssl0.9.8', and 'libssl-dev' installed before compiling.

<geshi lang="bash">
$ ldd /usr/local/bin/httperf
        libresolv.so.2 => /lib/libresolv.so.2 (0x00002b3181afe000)
        libnsl.so.1 => /lib/libnsl.so.1 (0x00002b3181d13000)
        libssl.so.0.9.8 => /usr/lib/libssl.so.0.9.8 (0x00002b3181f2c000)
        libcrypto.so.0.9.8 => /usr/lib/libcrypto.so.0.9.8 (0x00002b3182175000)
        libm.so.6 => /lib/libm.so.6 (0x00002b31824f7000)
        libc.so.6 => /lib/libc.so.6 (0x00002b3182778000)
        libdl.so.2 => /lib/libdl.so.2 (0x00002b3182ad3000)
        libz.so.1 => /usr/lib/libz.so.1 (0x00002b3182cd8000)
        /lib64/ld-linux-x86-64.so.2 (0x00002b31818e0000)
</geshi>
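A quick way to cover both checks, assuming an Ubuntu box like the one above, is something like the following sketch (the package names are the ones this setup needed; newer releases ship different libssl versions):

<geshi lang="bash">
# Install the build dependencies mentioned above before compiling.
$ sudo apt-get install libc6 libssl0.9.8 libssl-dev

# After building, confirm SSL support both ways:
$ httperf --help 2>&1 | grep -- --ssl        # option should appear in the usage output
$ ldd /usr/local/bin/httperf | grep ssl      # binary should link against libssl/libcrypto
</geshi>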

I ran a quick test against Amazon to get an idea of the capacity of my client machine. Looks like I can sustain about 45 conn/s and 455 req/s.

<geshi lang="bash">
$ httperf --timeout=5 --client=0/1 --server=www.amazon.com --port=443 --uri=/favicon.ico --rate=60 --send-buffer=4096 \
    --recv-buffer=16384 --ssl --num-conns=500 --num-calls=10
httperf --timeout=5 --client=0/1 --server=www.amazon.com --port=443 --uri=/favicon.ico --rate=60 --send-buffer=4096 --recv-buffer=16384 --ssl --num-conns=500 --num-calls=10
Maximum connect burst length: 5

Total: connections 500 requests 5000 replies 5000 test-duration 10.982 s

Connection rate: 45.5 conn/s (22.0 ms/conn, <=94 concurrent connections)
Connection time [ms]: min 385.9 avg 1014.7 max 5818.1 median 503.5 stddev 1004.4
Connection time [ms]: connect 220.6
Connection length [replies/conn]: 10.000

Request rate: 455.3 req/s (2.2 ms/req)
Request size [B]: 78.0

Reply rate [replies/s]: min 452.6 avg 490.6 max 528.6 stddev 53.7 (2 samples)
Reply time [ms]: response 69.1 transfer 10.3
Reply size [B]: header 272.0 content 1406.0 footer 0.0 (total 1678.0)
Reply status: 1xx=0 2xx=5000 3xx=0 4xx=0 5xx=0

CPU time [s]: user 1.10 system 6.54 (user 10.0% system 59.6% total 69.6%)
Net I/O: 781.1 KB/s (6.4*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
</geshi>
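The requested --rate=60 was higher than the 45 conn/s actually achieved, which suggests the client, not Amazon, is the limit here. One way to find the saturation point is to sweep the offered rate and compare it with the achieved connection rate. This is just a sketch using the same target and options as above; for a real test, point it at a server you control:

<geshi lang="bash">
# Sweep the offered connection rate and report what the client actually sustained.
for rate in 20 40 60 80 100; do
  achieved=$(httperf --timeout=5 --client=0/1 --server=www.amazon.com --port=443 \
      --uri=/favicon.ico --ssl --rate=$rate --num-conns=500 --num-calls=10 \
      | awk '/^Connection rate:/ {print $3}')
  echo "requested=$rate conn/s  achieved=$achieved conn/s"
done
</geshi>

When the achieved rate stops tracking the requested rate, you have found roughly what this client can generate.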

Cygwin

Httperf works great from Cygwin. It's not a standard package, but it's easy to build from source. The usual ./configure && make && make install works fine.
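For reference, a minimal source build looks like this; the tarball name and version are only an example, and the default install prefix is /usr/local:

<geshi lang="bash">
# Minimal sketch of a source build (version number is just an example).
$ tar xzf httperf-0.9.0.tar.gz
$ cd httperf-0.9.0
$ ./configure          # add --prefix=... to install somewhere other than /usr/local
$ make
$ make install         # may need elevated privileges outside Cygwin
</geshi>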

The only caveat is that you can't use the '--hog' option. That option allows httperf to use non-ephemeral ports (i.e. ports outside the 1024 to 5000 ephemeral range). If you use the --hog option, you may get this error:

<geshi lang="bash">
httperf: connection failed with unexpected error 106
</geshi>

Slicehost

I ran a quick test against my own host. Unfortunately, the host I ran it from is in Australia and is on a slow DSL line.

<geshi lang="bash">
$ httperf --timeout=5 --client=0/1 --server=www.theeggeadventure.com --port=80 --uri=/wikimedia/index.php/HttPerf --rate=60 --num-conns=10 --num-calls=10
httperf --timeout=5 --client=0/1 --server=www.theeggeadventure.com --port=80 --uri=/wikimedia/index.php/HttPerf --rate=60 --send-buffer=4096 --recv-buffer=16384 --num-conns=10 --num-calls=10
httperf: warning: open file limit > FD_SETSIZE; limiting max. # of open files to FD_SETSIZE
Maximum connect burst length: 1

Total: connections 10 requests 100 replies 100 test-duration 21.612 s

Connection rate: 0.5 conn/s (2161.2 ms/conn, <=10 concurrent connections)
Connection time [ms]: min 18249.0 avg 19896.4 max 21460.8 median 20134.5 stddev 1111.7
Connection time [ms]: connect 219.4
Connection length [replies/conn]: 10.000

Request rate: 4.6 req/s (216.1 ms/req)
Request size [B]: 104.0

Reply rate [replies/s]: min 3.2 avg 4.6 max 6.2 stddev 1.3 (4 samples)
Reply time [ms]: response 1242.7 transfer 725.0
Reply size [B]: header 728.0 content 23325.0 footer 2.0 (total 24055.0)
Reply status: 1xx=0 2xx=100 3xx=0 4xx=0 5xx=0

CPU time [s]: user 2.02 system 19.58 (user 9.3% system 90.6% total 99.9%)
Net I/O: 109.2 KB/s (0.9*10^6 bps)

Errors: total 0 client-timo 0 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
</geshi>
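The FD_SETSIZE warning at the top of that run is harmless: httperf uses select(), so it caps itself at FD_SETSIZE descriptors even when the shell's open-file limit is higher. If you want to check or quiet it, something like the following works; whether you change the limit at all depends on how many concurrent connections you actually need:

<geshi lang="bash">
# httperf clamps itself to FD_SETSIZE (typically 1024 on Linux) regardless of ulimit.
$ ulimit -n            # show the current per-process open file limit
$ ulimit -n 1024       # optionally drop the soft limit to match FD_SETSIZE and silence the warning
</geshi>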

Other tests

While there are some pay sites that will do stress tests, and you can do your own with EC2, you can get some good metrics from Pingdom.com and Browsershots.org. Browsershots is quite neat because a whole bunch of different browsers will hit your site. You don't get control or metrics, but it's a pretty good gauge of what web users will see.