Redis cluster performance testing tool redis-benchmark

Before running the examples in this article, assume the cluster has a Redis node listening on port 7001 on one of the machines.

Remember to specify the node's IP and port, as in the following examples:

redis-benchmark -h 10.166.15.36 -p 7001 -n 100000 -q script load "redis.call('set','foo','1234567890')"

./redis-benchmark  -h 10.166.15.36 -p 7001  -q -n 100000 
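
With a redis-benchmark built from Redis 6.0 or later, the tool can also discover the other cluster nodes and spread random keys across hash slots via --cluster (older builds do not have this flag). A minimal sketch, reusing the example node address above and limiting the run to a few tests with -t:

./redis-benchmark -h 10.166.15.36 -p 7001 --cluster -n 100000 -r 100000 -q -t set,get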

The following is reprinted from: https://blog.csdn.net/youqika/article/details/42740647

1 Experimental data

Redis ships with a tool called redis-benchmark that simulates N clients concurrently sending M requests. The experiment makes the following comparisons: 
(1) running the built-in test suite in quiet mode versus explicitly specifying a command to run; 
(2) using a single key versus random keys; 
(3) the default 50 clients versus 100 clients and 10 clients; 
(4) executing commands one at a time versus sending multiple commands at once (pipelining).

Experimental hardware: a virtual machine with 2 processors, 5 GB of memory, and a 20 GB hard disk.

Test the following commands: 
(1) PING_INLINE; 
(2) PING_BULK; 
(3) SET: associate a string value with a key; 
(4) GET: return the string value associated with a key; if the value stored at the key is not a string, an error is returned; 
(5) INCR: increment the number stored at a key by one; if the value cannot be converted to a number, an error is returned; 
(6) LPUSH: insert one or more values at the head of a list; 
(7) RPUSH: insert one or more values at the tail of a list; 
(8) LPOP: remove and return the head element of a list; 
(9) RPOP: remove and return the tail element of a list; 
(10) SADD: add one or more members to a set; members already present in the set are ignored; 
(11) SPOP: remove and return a random element from a set; 
(12) LPUSH: same operation as (6), run again so the LRANGE tests below have list data to read; 
(13) LRANGE_100: return a range of elements from a list, here the first 100 elements; 
(14) LRANGE_300: return the first 300 elements of a list; 
(15) LRANGE_500: return the first 500 elements of a list; 
(16) LRANGE_600: return the first 600 elements of a list; 
(17) MSET: set multiple key-value pairs at once, where each value is a string.
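
If you only care about a subset of these tests, redis-benchmark can restrict the run to a comma-separated list of test names with -t, for example:

./redis-benchmark -h 10.166.15.36 -p 7001 -q -n 100000 -t set,get,incr,lpush,lpop,sadd,spop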

2 Experimental results

(1) ./redis-benchmark -q -n 100000 
runs in quiet mode and uses only a single key. 
(2) ./redis-benchmark -n 100000 -q script load "redis.call('set', 'foo', 'bar')" 
runs an explicitly specified command (here SCRIPT LOAD with a small Lua script) instead of the built-in test suite. 
(3) ./redis-benchmark -r 100000 -n 100000 -q 
runs in quiet mode with a keyspace of 100,000 random keys. 
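
The -r option works by expanding the placeholder __rand_int__ (used by the built-in tests, and also usable in a command you supply yourself) into a random number between 0 and keyspace-1. A minimal sketch with an explicitly specified SET:

./redis-benchmark -n 100000 -r 100000 -q set key:__rand_int__ value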
write picture description here 
(4) By default, each client sends its next request only after the previous one has completed, and the benchmark simulates 50 clients, so the server reads each client's commands almost sequentially. 
./redis-benchmark -c 100 -r 100000 -n 100000 -q 
simulates 100 clients. 
./redis-benchmark -c 10 -r 100000 -n 100000 -q 
simulates 10 clients. 
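To reproduce the client-count comparison in one pass, a simple shell loop over -c works; a minimal sketch:

for c in 10 50 100; do
  echo "clients: $c"
  ./redis-benchmark -c $c -r 100000 -n 100000 -q -t set,get,incr
done
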
(5) Redis supports pipelining, which makes it possible to send multiple commands at once; pipelining can improve the server's throughput (TPS). 
./redis-benchmark -r 100000 -n 100000 -P 16 -q 
tests with pipelines of 16 commands. 
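Similarly, sweeping the pipeline depth with -P shows how much batching helps; a minimal sketch:

for p in 1 4 16 64; do
  echo "pipeline depth: $p"
  ./redis-benchmark -r 100000 -n 100000 -P $p -q -t set,get
done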

3 Experimental analysis

The analysis focuses on comparing the performance of SET, GET, INCR, LPUSH, LPOP, SADD, SPOP and LRANGE_100. 
The scenario numbers mean: 
1: single key, 50 clients; 2: random keys, 50 clients; 3: random keys, 100 clients; 4: random keys, 10 clients; 5: random keys, 50 clients, pipelined execution. 
[Table image: requests per second for each command under scenarios 1-5] 
Note: all figures in the table are in requests per second. 
From the table above: 
(1) With the same number of clients, switching from a single key to random keys lowers the requests per second of SET and LPOP, while GET, INCR, LPUSH, SADD, SPOP and LRANGE increase; 
(2) with randomly generated keys, SET and SADD throughput drops as the number of clients increases; taking cache hits into account, the other commands show no regular trend; 
(3) with all other conditions equal, pipelined execution greatly increases the throughput of every command. 
From this we can conclude that in a real environment, to handle large data volumes and high concurrency, throughput can be improved by increasing the cache size and by executing commands in a pipeline.

