Linux stress testing tools (http_load, webbench, ab, siege)

One, http_load

The program is very small; after decompression it is under 100K.
http_load issues requests in parallel and repeatedly to test the throughput and load of a web server. Unlike most stress testing tools, it runs as a single process, so it generally will not overwhelm the client machine. It can also test HTTPS requests.

Download link: http://soft.vpser.net/test/http_load/http_load-12mar2006.tar.gz

Install
#tar zxvf http_load-12mar2006.tar.gz
#cd http_load-12mar2006
#make && make install

Command format: http_load -p <number of concurrent processes> -s <duration in seconds> <URL file>
The parameters can be freely combined, and there is no restriction on which ones you choose. You can also write the long forms, e.g. http_load -parallel 5 -seconds 300 urls.txt. A brief explanation of the parameters:
-parallel, abbreviated -p: number of concurrent processes
-fetches, abbreviated -f: total number of fetches
-rate, abbreviated -r: number of requests per second
-seconds, abbreviated -s: total duration of the test in seconds
Prepare the URL file, e.g. urllist.txt, with one URL per line; around 50-100 URLs give more meaningful results. The file format is as follows:
http://www.vpser.net/uncategorized/choose-vps.html
http://www.vpser.net/vps-cp/hypervm-tutorial.html
http://www.vpser.net/coupons/diavps-april-coupons.html
http://www.vpser.net/security/vps-backup-web-mysql.html
For example:
http_load -p 30 -s 60 urllist.txt
Now that the parameters are clear, let's run one command and look at its output.
Command: % ./http_load -rate 5 -seconds 10 urls
This runs a test lasting 10 seconds at a rate of 5 requests per second. The output looks like:
49 fetches, 2 max parallel, 289884 bytes, in 10.0148 seconds
5916 mean bytes/connection
4.89274 fetches/sec, 28945.5 bytes/sec
msecs/connect: 28.8932 mean, 44.243 max, 24.488 min
msecs/first-response: 63.5362 mean, 81.624 max, 57.803 min
HTTP response codes: code 200 -- 49

Analysis of the results:
1. 49 fetches, 2 max parallel, 289884 bytes, in 10.0148 seconds
49 requests completed during the test, the maximum number of concurrent processes was 2, the total data transferred was 289884 bytes, and the run time was 10.0148 seconds.
2. 5916 mean bytes/connection: the average amount of data transferred per connection, 289884 / 49 = 5916.
3. 4.89274 fetches/sec, 28945.5 bytes/sec: 4.89274 requests were answered per second, and 28945.5 bytes were transferred per second.
4. msecs/connect: the average response time per connection was 28.8932 ms, the maximum 44.243 ms, and the minimum 24.488 ms.
5. msecs/first-response: the average time to first response was 63.5362 ms, the maximum 81.624 ms, and the minimum 57.803 ms.
6. HTTP response codes: code 200 -- 49 shows the distribution of response codes; if many 403 (or other error) responses appear, the system may have hit some limit and is worth checking.

Special note: the main indicators in the test results are fetches/sec and msecs/connect, i.e. the number of queries the server can respond to per second and the response time per connecting user. The test result mainly depends on these two values. Of course, these two indicators alone cannot complete a performance analysis; the server's CPU and memory usage also need to be examined.
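As a quick sanity check on how these figures relate, the derived numbers in the http_load output can be recomputed from the raw counters of the sample run above (a minimal sketch in Python):

```python
# Raw counters from the sample http_load run above.
fetches = 49            # completed requests
total_bytes = 289884    # total data transferred
elapsed = 10.0148       # run time in seconds

# Derived metrics as http_load reports them.
mean_bytes_per_conn = total_bytes / fetches   # "mean bytes/connection"
fetches_per_sec = fetches / elapsed           # "fetches/sec"
bytes_per_sec = total_bytes / elapsed         # "bytes/sec"

print(int(mean_bytes_per_conn))       # 5916
print(round(fetches_per_sec, 2))      # 4.89
print(round(bytes_per_sec, 1))        # 28945.6
```

The tiny difference from the reported 4.89274 fetches/sec comes from rounding in the printed elapsed time.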

Two, webbench

webbench is a website stress testing tool for Linux that can simulate up to 30,000 concurrent connections to test the load capacity of a website. Download links can be found via Google; here is one: http://soft.vpser.net/test/webbench/webbench-1.5.tar.gz
This program is even smaller, under 50K after decompression.

Install
#tar zxvf webbench-1.5.tar.gz
#cd webbench-1.5
#make && make install
This copies the webbench binary generated in the current directory into the system path, after which it can be used directly.

Usage:
webbench -c <concurrency> -t <test duration in seconds> URL
For example:
webbench -c 5000 -t 120 http://www.163.com

Three, ab

ab is a powerful testing tool that comes with Apache; it is installed along with Apache itself.
You can check its help output for usage:

$ ./ab
./ab: wrong number of arguments
Usage: ./ab [options] [http://]hostname[:port]/path
Options are:
    -n requests     Number of requests to perform
    -c concurrency  Number of multiple requests to make at a time
    -t timelimit    Seconds to max. wait for responses
    -p postfile     File containing data to POST
    -T content-type Content-type header to use for POST data
    -v verbosity    How much troubleshooting info to print
    -w              Print out results in HTML tables
    -i              Use HEAD instead of GET
    -x attributes   String to insert as table attributes
    -y attributes   String to insert as tr attributes
    -z attributes   String to insert as td or th attributes
    -C attribute    Add cookie, eg. 'Apache=1234'. (repeatable)
    -H attribute    Add arbitrary header line, eg. 'Accept-Encoding: gzip',
                    inserted after all normal header lines. (repeatable)
    -A attribute    Add Basic WWW Authentication, the attributes
                    are a colon separated username and password.
    -P attribute    Add Basic Proxy Authentication, the attributes
                    are a colon separated username and password.
    -X proxy:port   Proxy server and port number to use
    -V              Print version number and exit
    -k              Use HTTP KeepAlive feature
    -d              Do not show percentiles served table.
    -S              Do not show confidence estimators and warnings.
    -g filename     Output collected data to gnuplot format file.
    -e filename     Output CSV file with percentages served
    -h              Display usage information (this message)
There are many options; in general only -n and -c are used. For example:
./ab -c 100 -n 1000 http://www.vpser.net/index.php
This issues 1000 requests in total to index.php, with at most 100 running concurrently. Note that ab requires the total number of requests (-n) to be at least as large as the concurrency (-c).
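To make the -n/-c pair concrete, here is a minimal sketch (in Python, not ab itself) of what "issue N requests through C concurrent workers" means, run against a throwaway local test server; the handler and counts are illustrative assumptions:

```python
# Sketch of ab-style "-n total, -c concurrency" against a local server.
import threading
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor
from http.server import HTTPServer, BaseHTTPRequestHandler

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal 200 response so every request succeeds.
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to port 0 so the OS picks a free port.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

def fetch(_):
    with urllib.request.urlopen(url) as r:
        return r.status

n, c = 100, 10  # like: ab -n 100 -c 10 <url>
start = time.time()
with ThreadPoolExecutor(max_workers=c) as pool:
    codes = list(pool.map(fetch, range(n)))
elapsed = time.time() - start

print(codes.count(200), "requests returned 200")
print(round(n / elapsed, 2), "requests per second")
server.shutdown()
```

The thread pool caps the number of in-flight requests at c, which is exactly what ab's concurrency level controls.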

Four, siege

An open-source stress testing tool that can simulate multiple users accessing a web site concurrently according to its configuration, record the response time of each user's requests, and repeat the test at a given concurrency level.
Official site: http://www.joedog.org/
siege download: http://soft.vpser.net/test/siege/siege-2.67.tar.gz
Unzip:

tar -zxf siege-2.67.tar.gz

Enter the extracted directory:

cd siege-2.67/

Install:

#./configure
#make && make install

Usage
siege -c 200 -r 10 -f example.url
-c sets the concurrency, -r the number of repetitions. The URL file is plain text with one URL per line; siege picks URLs from it at random.

example.url content:

http://www.licess.cn
http://www.vpser.net
http://soft.vpser.net

Explanation of the results:
Lifting the server siege... done.
Transactions: 3419263 hits // number of transactions completed
Availability: 100.00 % // success rate
Elapsed time: 5999.69 secs // total time taken
Data transferred: 84273.91 MB // total data transferred
Response time: 0.37 secs // average response time, reflecting the speed of the connection
Transaction rate: 569.91 trans/sec // average number of transactions completed per second
Throughput: 14.05 MB/sec // average data transferred per second
Concurrency: 213.42 // actual highest concurrency
Successful transactions: 2564081 // number of successful transactions
Failed transactions: 11 // number of failed transactions
Longest transaction: 29.04 // longest time taken by a single transaction
Shortest transaction: 0.00 // shortest time taken by a single transaction
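How the derived columns in this report relate to the raw counts can be checked with a few lines of Python (a sketch using the sample figures above; availability is computed here from total vs. failed transactions, which matches the reported 100.00%):

```python
# Raw numbers from the sample siege report above.
transactions = 3419263     # hits
failed = 11                # failed transactions
elapsed = 5999.69          # secs
data_mb = 84273.91         # MB transferred

# Derived columns as siege reports them.
availability = 100 * transactions / (transactions + failed)
rate = transactions / elapsed      # trans/sec
throughput = data_mb / elapsed     # MB/sec

print(round(availability, 2))      # 100.0
print(round(rate, 2))              # 569.91
print(round(throughput, 2))        # 14.05
```

The recomputed transaction rate and throughput match the reported 569.91 trans/sec and 14.05 MB/sec.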

Origin blog.csdn.net/weixin_46152207/article/details/113711073