- High speed
- Lightweight
- Maintains persistent server connections
- Keeps the number of connections to the backend caching servers low
- Enables pipelining of requests and responses
- Supports proxying to multiple servers
- Supports multiple server pools simultaneously
- Shards data automatically across multiple servers
- Implements the complete memcached ASCII and Redis protocols
- Simple server-pool configuration through a YAML file
- Supports multiple hashing modes, including consistent hashing and distribution
- Can be configured to disable nodes on failure
- Observability via statistics exposed on a stats monitoring port
- Works on Linux, *BSD, OS X and Solaris (SmartOS)
```shell
wget https://github.com/twitter/twemproxy/archive/master.zip
unzip master.zip
cd twemproxy-master
autoreconf -fvi
./configure
make
make install
```
```yaml
redis1:
  listen: 127.0.0.1:8000        # listening address and port
  redis: true                   # whether this pool proxies Redis (rather than memcached)
  hash: fnv1a_64                # hash function
  distribution: ketama          # key distribution algorithm (consistent hashing)
  auto_eject_hosts: true        # temporarily eject a node that stops responding
  timeout: 4000                 # server timeout, in milliseconds
  server_retry_timeout: 2000    # retry interval for ejected nodes, in milliseconds
  server_failure_limit: 3       # consecutive failures before a node is ejected
  servers:                      # the Redis nodes (ip:port:weight)
    - 127.0.0.1:9001:1
    - 127.0.0.1:9002:1
    - 127.0.0.1:9003:1
```
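Since twemproxy supports multiple server pools simultaneously, a second pool is simply another top-level entry in the same YAML file. The pool name, listen port, and backend addresses below are made up for illustration:

```yaml
redis2:                         # hypothetical second pool in the same file
  listen: 127.0.0.1:8001
  redis: true
  hash: fnv1a_64
  distribution: ketama
  servers:
    - 127.0.0.1:9004:1
    - 127.0.0.1:9005:1
```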
(4) Start twemproxy

```shell
src/nutcracker -d -c conf/nutcracker.yml
```

(5) Test with redis-cli
During testing, inspecting the AOF files of the Redis instances on ports 9001, 9002, and 9003 shows that keys written through the proxy end up on different instances. This is the consistent hashing algorithm at work: each key is assigned to one node, which is exactly how data sharding is realized.
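Conceptually, the sharding observed above can be reproduced in a few lines. The sketch below is a simplification, not twemproxy's C implementation: it uses the fnv1a_64 hash named in the configuration, but a plain modulo split instead of the ketama ring.

```python
# Sketch of key-to-node assignment (assumed simplification of twemproxy):
# hash each key with FNV-1a 64-bit, then pick a node by modulo.
NODES = ["127.0.0.1:9001", "127.0.0.1:9002", "127.0.0.1:9003"]

def fnv1a_64(data: bytes) -> int:
    # Standard FNV-1a parameters for 64 bits
    h = 0xcbf29ce484222325
    for byte in data:
        h ^= byte
        h = (h * 0x100000001b3) & 0xFFFFFFFFFFFFFFFF
    return h

counts = {n: 0 for n in NODES}
for i in range(9000):
    node = NODES[fnv1a_64(f"key:{i}".encode()) % len(NODES)]
    counts[node] += 1
print(counts)  # keys spread roughly evenly over the three nodes
```

With `distribution: ketama`, twemproxy places nodes on a hash ring instead of taking a modulo, which matters when nodes are added or removed.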
2. nutcracker usage and command options
Options:
  -h, --help             : show help and display the command options
  -V, --version          : show the nutcracker version and exit
  -t, --test-conf        : test the configuration file for syntax errors and exit
  -d, --daemonize        : run as a daemon
  -D, --describe-stats   : print the stats description and exit
  -v, --verbosity=N      : set the logging level (default: 5, min: 0, max: 11)
  -o, --output=S         : set the log output file (default: stderr)
  -c, --conf-file=S      : set the configuration file path (default: conf/nutcracker.yml)
  -s, --stats-port=N     : set the stats monitoring port (default: 22222)
  -a, --stats-addr=S     : set the stats monitoring address (default: 0.0.0.0)
  -i, --stats-interval=N : set the stats aggregation interval (default: 30000 msec)
  -p, --pid-file=S       : set the pid file path (default: off)
  -m, --mbuf-size=N      : set the size of an mbuf chunk in bytes (default: 16384 bytes)
After editing the YAML file, `nutcracker -t -c nutcracker.yml` checks whether the configuration file is valid before you start the proxy.
3. Disadvantages of twemproxy
- Although a failed node can be ejected dynamically, the data stored on the ejected node is lost.
- When nodes are added to the Redis cluster, twemproxy does not redistribute the existing data; the author said on the mailing list that this has to be implemented with external scripts.
- Some performance is lost (any proxy adds overhead, but twemproxy's overhead is very small).
- Multi-key operations are not supported, such as intersection, union, and difference of sets (MGET and DEL are exceptions).
- Redis transactions are not supported.
- Error messages are incomplete.
- The most serious of these shortcomings is that ejecting an unavailable node loses that node's data, sacrificing the A (availability) in CAP. It is best to put a slave behind every master, with Keepalived + VIP for failover, so that the slave is automatically promoted to master and high availability is preserved.
- Performance is slightly reduced; since it is a proxy, there is no way to avoid this.
- Nodes cannot be added or removed dynamically; operations staff have to handle this themselves and restart the cluster, ideally by provisioning a new cluster and migrating to it.
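The redistribution problem above can be made concrete: any key whose node assignment changes after adding a node can no longer be found through the proxy until its data is migrated. The sketch below (illustrative Python, not twemproxy code) compares how many keys move under a plain modulo split versus a ketama-style hash ring when going from 3 to 4 nodes.

```python
# Compare the fraction of keys that change nodes when a node is added:
# naive modulo distribution vs. a consistent-hashing ring with virtual nodes.
import bisect
import hashlib

def h(value: str) -> int:
    # md5 stands in for twemproxy's hash functions in this sketch
    return int.from_bytes(hashlib.md5(value.encode()).digest()[:8], "big")

KEYS = [f"key:{i}" for i in range(10000)]

def modulo_map(n_nodes):
    # naive distribution: node = hash % number of nodes
    return {k: h(k) % n_nodes for k in KEYS}

def ring_map(n_nodes, vnodes=160):
    # each node occupies many points on a hash ring; a key belongs to the
    # first node clockwise from the key's hash
    ring = sorted((h(f"node{n}-{v}"), n)
                  for n in range(n_nodes) for v in range(vnodes))
    hashes = [p for p, _ in ring]
    return {k: ring[bisect.bisect(hashes, h(k)) % len(ring)][1] for k in KEYS}

def moved(before, after):
    return sum(before[k] != after[k] for k in KEYS) / len(KEYS)

mod_frac = moved(modulo_map(3), modulo_map(4))
ring_frac = moved(ring_map(3), ring_map(4))
print(f"modulo: {mod_frac:.0%} of keys move, ring: {ring_frac:.0%} of keys move")
```

With modulo, roughly three quarters of the keys move; the ring moves only about the share of keys the new node takes over, which is why twemproxy's ketama distribution limits (but does not eliminate) the cost of resizing.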