Redis Java client Jedis: connection pool + simple load balancing

1. Install the Redis service on Windows
2. Use Jedis to implement a connection pool + simple load balancing

1. Download the redis_win_2.6.13.zip installation package

Download address: search Baidu for it yourself

2. After decompressing the redis_win_2.6.13.zip installation package, go to the directory where redis-server.exe is located

In this directory, create a new configuration file: redis01.conf [the file name is not fixed]. The content of the file is as follows:

# Whether to run as a background process
daemonize yes

# Where to write the pid file of the background process
pidfile /var/run/redis.pid

# Listening port, the default is 6379
port 6379

# Only accept requests from the following bound IPs
bind 127.0.0.1

# Set the unix socket; empty by default, meaning Redis does not listen on a unix socket
# unixsocket /tmp/redis.sock
# unixsocketperm 755

# How long a client may stay idle before the connection is closed. 0 means never close
timeout 5
 
# TCP keepalive.
# If non-zero, SO_KEEPALIVE is used to send TCP ACKs to the client in the absence of communication.
# This has two purposes:
# 1. Detect dead peers.
# 2. From the point of view of network equipment in the middle, keep the connection alive.
# On Linux, the configured value is the period used to send ACKs.
# Note: the connection is closed only after roughly twice that time. On other kernels the period depends on the kernel configuration.
# A reasonable value is 60 seconds
tcp-keepalive 0
 
# Specify the log level; each level below logs less information than the one above
# debug is used for development/testing
# verbose is less detailed than debug
# notice is suitable for production
# warning only records very important messages
loglevel notice

# Log file name. If set to stdout, logs go to standard output; in that case no log is produced when running as a background process
logfile C:/Users/michael/Desktop/file/work/data/redis/logs/redis.log

# To enable logging to the system logger, set this option to yes
# syslog-enabled no

# Specify the syslog identity
# syslog-ident redis

# Specify the syslog facility. Must be user or one of local0 ~ local7
# syslog-facility local0

# Set the number of databases; the first database is numbered 0
databases 16
 
############## Snapshot ##############

# Conditions under which the database is saved to disk; multiple conditions may be listed, and a snapshot is taken if any one of them is met
# At least 1 key changed within 900 seconds
save 900 1
# At least 10 keys changed within 300 seconds
save 300 10
# At least 10000 keys changed within 60 seconds
save 60 10000

# Whether to stop accepting writes when background persistence fails
stop-writes-on-bgsave-error yes

# Whether to compress data with the LZF algorithm when writing to disk; the default is yes
rdbcompression yes

# Whether to append a CRC64 checksum to the end of each file -- costs some time but improves safety
rdbchecksum yes

# File name of the database on disk
dbfilename dump.rdb

# Redis working directory; the database file above and the AOF log are written to this directory
dir C:/Users/michael/Desktop/file/work/data/redis/01/
 
############## Sync ##############

# Master-slave replication; configure this when the instance is a slave
# slaveof <masterip> <masterport>

# Configure this when the master requires password authentication
# masterauth <master-password>

# Whether to respond to client requests when the slave loses its link to the master or is still synchronizing
# Set to yes to keep responding
# Set to no to return "SYNC with master in progress" directly
slave-serve-stale-data yes

# Set whether the slave is read-only.
# Note: even a read-only slave should not be exposed to an untrusted network environment
slave-read-only yes

# Set the interval at which the slave sends pings to the master
# repl-ping-slave-period 10

# Timeout for bulk data transfer I/O, master data, and ping responses; the default is 60s
# This value must be larger than repl-ping-slave-period, otherwise timeouts will be detected continually
# repl-timeout 60

# Disable TCP_NODELAY on the slave socket after SYNC?
# If set to yes, Redis uses fewer TCP packets and less bandwidth to send data to slaves.
# But this adds latency on the slave side: about 40ms with the default Linux kernel settings.
# If set to no, latency on the slave side is reduced but more bandwidth is used for replication.
# By default we optimize for low latency.
# But if traffic is very heavy, or the master and slaves are many hops apart, setting it to yes is more reasonable.
repl-disable-tcp-nodelay no

# Set the slave priority; the default is 100
# When the master stops working correctly, the slave with the lowest number is promoted to master first; 0 disables promotion
slave-priority 100
 
############## Security ##############

# Set the client connection password. Because Redis can answer on the order of 1,000,000 requests per second, the password should be particularly complex
# requirepass foobared

# Rename commands, or disable them.
# Renaming a command to an empty string disables dangerous commands such as FLUSHALL (deletes all data)
# Note that writing command aliases to the AOF file or passing them to slaves may cause problems
# rename-command CONFIG ""
 
############## Limits ##############

# Set the maximum number of connected clients; the default is 10000.
# The number of connections actually accepted is the configured value minus 32, which Redis reserves for internal file descriptors
# maxclients 10000

# Set the maximum amount of memory to use; especially useful when using Redis as an LRU cache.
# Set this to a value smaller than what the system can provide,
# because when an eviction policy is enabled the slave output buffers also consume memory
# maxmemory <bytes>

# Which eviction policy to use when the memory limit is reached
# volatile-lru -> remove keys with an expire set, using an LRU algorithm
# allkeys-lru -> remove any key, using an LRU algorithm
# volatile-random -> randomly remove a key with an expire set
# allkeys-random -> randomly remove any key
# volatile-ttl -> remove the key with the nearest expire time
# noeviction -> do not delete keys; return an error on write requests
# The default is volatile-lru
# maxmemory-policy volatile-lru

# The LRU and minimal-TTL algorithms are not implemented precisely.
# To save memory, the least recently used key is selected only within a sample; this option sets the sample size
# maxmemory-samples 3
 
############## AOF ##############

# AOF and RDB persistence can be enabled at the same time
# Redis reads the AOF file at startup; the AOF file offers a better durability guarantee
appendonly no

# File name of the AOF log; the default is appendonly.aof
# appendfilename appendonly.aof

# Set when the append-only log is flushed to disk; there are three modes
# no: let the operating system decide when to flush. Best performance but least reliable
# everysec: flush once per second. A compromise; recommended
# always: flush after every write. Worst performance, but somewhat safer than the above
appendfsync everysec

# When the AOF sync policy is set to always or everysec
# and a background save process (background save or AOF log background rewrite) performs a lot of disk I/O,
# some Linux configurations make Redis block for a long time on fsync() calls.
# There is no fix for this yet; even an fsync in a different thread blocks our synchronous write(2) calls.
# To mitigate this, enable the following option to prevent fsync() from being called in the main process
# while a BGSAVE or BGREWRITEAOF is running.
no-appendfsync-on-rewrite no

# Automatic AOF rewrite (merge commands and reduce log size)
# When the AOF log grows by a certain percentage, Redis calls BGREWRITEAOF to rewrite the log file automatically.
# How it works: Redis records the size of the AOF file after the last rewrite.
# If no rewrite has happened since startup, the AOF size at startup is used.
# This base size is compared with the current size; a rewrite is triggered when the current size exceeds it by the given percentage.
# You also need to specify a minimum size for a rewrite, to avoid
# rewriting the AOF while the file is still small even though the percentage is reached.
# Set the percentage to 0 to disable automatic rewrites
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
 
############## Lua scripts ##############

# Maximum execution time of a Lua script, in milliseconds
# An error is reported and logged after the timeout
# When a script runs longer than the maximum execution time,
# only two commands can be used: SCRIPT KILL and SHUTDOWN NOSAVE.
# SCRIPT KILL stops a script that has not yet called any write command.
# SHUTDOWN NOSAVE is the only way to shut down the server when the script has already executed a write command
# and the user does not want to wait for the script to finish normally.
# Set this option to 0 or a negative number to remove the execution time limit
lua-time-limit 5000
 
############## Slow queries ##############

# The Redis slow query log records queries that exceed the configured time; only command execution time is measured
# I/O operations are not included, such as talking to the client or sending the reply
# The unit is microseconds; 1000000 microseconds = 1 second
# Set to a negative number to disable the slow log; set to 0 to log every command
slowlog-log-slower-than 10000

# Any log length can be set, but it consumes memory; once the length is exceeded, the oldest record is removed
# Use the SLOWLOG RESET command to reclaim memory
slowlog-max-len 128
 
############## Advanced settings ##############

# When a hash has only a few entries it is stored in a memory-efficient data structure, provided the largest entry does not exceed the given threshold.
# "Small" is defined as follows:
hash-max-ziplist-entries 512
hash-max-ziplist-value 64

# Like the hash encoding, small lists are encoded in a special way to save memory. "Small" is defined as follows:
list-max-ziplist-entries 512
list-max-ziplist-value 64

# Sets use a special memory-saving encoding only in the following case:
# --> the set consists entirely of strings that are 64-bit signed decimal integers
# The following option sets the size limit for this special encoding.
set-max-intset-entries 512

# When the length and element size of a sorted set are below the following values, a special encoding saves memory
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
 
# Active rehashing uses 1 millisecond out of every 100 CPU milliseconds to help rehash the main hash table (the top-level key-value map).
# The Redis hash table uses a lazy rehashing mechanism: the more operations, the more rehashing steps.
# If the server is idle, rehashing never completes and the hash table occupies more memory.
# By default the main dictionary is rehashed 10 times per second to release memory.
# If you have hard latency requirements and an occasional 2ms delay is unacceptable, set this to no.
# Otherwise set it to yes
activerehashing yes
 
# The client output buffer limits force the disconnection of clients that read data too slowly
# There are three classes of clients
# normal -> normal clients
# slave -> slaves and MONITOR clients
# pubsub -> clients subscribed to at least one channel or pattern
# The syntax of the client output buffer limit is as follows (time unit: seconds)
# client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds>
# When the hard limit is reached, the connection is closed immediately.
# When the soft limit is reached, the connection is kept open for at most <soft seconds>
# By default, normal clients have no limit, since they only receive data after asking for it, so only asynchronous clients can request data faster than they can read it
# Pub/sub and slave clients do have a default limit, since they receive pushed data.
# Both the hard limit and the soft limit can be disabled by setting them to 0
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit slave 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
 
# Set the frequency of Redis background tasks, such as clearing expired keys.
# The range is 1 to 500; the default is 10. Higher values use more CPU but reduce latency.
# It is recommended not to exceed 100
hz 10

# When a child process rewrites the AOF file and the following option is enabled, the file is synced every 32MB of data generated.
# This helps write the file to disk incrementally and avoids large latency spikes
# aof-rewrite-incremental-fsync yes
 
############## Includes ##############

# Include a common configuration template
# include /path/to/other.conf
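
Many of the settings above can also be inspected or changed at runtime with the CONFIG GET / CONFIG SET commands, without editing the file. Below is a minimal sketch using the plain Jedis client; the class name ConfigCheck and the choice of parameters are only illustrative, and CONFIG SET only affects the running instance (it is not written back to redis01.conf):

import redis.clients.jedis.Jedis;


public class ConfigCheck {

    public static void main(String[] args) {
        // Connect to the instance started with redis01.conf
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        try {
            // CONFIG GET returns the parameter name together with its current value
            System.out.println(jedis.configGet("maxmemory-policy"));
            System.out.println(jedis.configGet("slowlog-log-slower-than"));

            // CONFIG SET adjusts the running instance only
            jedis.configSet("slowlog-log-slower-than", "20000");
        } finally {
            jedis.disconnect();
        }
    }
}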

3. Start the Redis service

Open a cmd window and change to the Redis installation directory.

Execute: redis-server.exe redis01.conf

4. Test the Redis service

In the Redis installation directory, execute the following commands to test:

redis-cli.exe -h 127.0.0.1 -p 6379

set testkey 123

get testkey

The next line will output "123" to the console

--- At this point, the test is successful
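
The same check can be done from Java with Jedis (a minimal sketch; it assumes the Jedis jar used in step 6 is on the classpath, and the class name is only illustrative):

import redis.clients.jedis.Jedis;


public class SingleNodeTest {

    public static void main(String[] args) {
        // Connect to the instance started with redis01.conf
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        try {
            System.out.println(jedis.ping());          // expected: PONG
            jedis.set("testkey", "123");
            System.out.println(jedis.get("testkey"));  // expected: 123
        } finally {
            jedis.disconnect();
        }
    }
}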



5. Start another Redis service; the steps are the same as above.

Note: you need to create another configuration file: redis02.conf

You can copy the content of redis01.conf and change the port to 6380 [6379 is the default port number].

Then open another cmd window, change to the Redis installation directory, and execute: redis-server.exe redis02.conf
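
Before wiring both instances into Jedis, it is worth confirming that each one is reachable. If the two instances share the same working directory, consider also pointing dir and logfile in redis02.conf at a separate folder so they do not overwrite each other's dump and log files. A minimal sketch (class name illustrative):

import redis.clients.jedis.Jedis;


public class TwoNodePing {

    public static void main(String[] args) {
        int[] ports = {6379, 6380};
        for (int port : ports) {
            Jedis jedis = new Jedis("127.0.0.1", port);
            try {
                // PONG means the instance on this port is up
                System.out.println(port + " -> " + jedis.ping());
            } finally {
                jedis.disconnect();
            }
        }
    }
}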

6. Jedis client code

import java.util.ArrayList;
import java.util.List;

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;


public class MainTest {

    /**
     * @param args
     */
    public static void main(String[] args) {
        // Each shard is one of the two Redis instances started above
        List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
        shards.add(new JedisShardInfo("127.0.0.1", 6379));
        shards.add(new JedisShardInfo("127.0.0.1", 6380));

        // The pool hands out ShardedJedis instances; keys are distributed
        // across the shards on the client side
        ShardedJedisPool sjp = new ShardedJedisPool(new JedisPoolConfig(), shards);
        ShardedJedis shardClient = sjp.getResource();
        try {
            shardClient.set("A", "123");
            shardClient.set("B", "234");
            shardClient.set("C", "345");

            try {
                System.out.println(shardClient.get("A"));
            } catch (Exception e) {
                e.printStackTrace();
            }

            try {
                System.out.println(shardClient.get("B"));
            } catch (Exception e) {
                e.printStackTrace();
            }

            try {
                System.out.println(shardClient.get("C"));
            } catch (Exception e) {
                e.printStackTrace();
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            sjp.returnResource(shardClient);
        }
    }

}
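
To see which of the two instances each key was routed to, ShardedJedis exposes the shard lookup it uses internally (getShardInfo). A minimal sketch with the same shard list as above (the class name ShardLookup is only illustrative):

import java.util.ArrayList;
import java.util.List;

import redis.clients.jedis.JedisPoolConfig;
import redis.clients.jedis.JedisShardInfo;
import redis.clients.jedis.ShardedJedis;
import redis.clients.jedis.ShardedJedisPool;


public class ShardLookup {

    public static void main(String[] args) {
        List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
        shards.add(new JedisShardInfo("127.0.0.1", 6379));
        shards.add(new JedisShardInfo("127.0.0.1", 6380));

        ShardedJedisPool pool = new ShardedJedisPool(new JedisPoolConfig(), shards);
        ShardedJedis client = pool.getResource();
        try {
            for (String key : new String[] { "A", "B", "C" }) {
                // getShardInfo returns the shard the key hashes to
                JedisShardInfo info = client.getShardInfo(key);
                System.out.println(key + " -> " + info.getHost() + ":" + info.getPort());
            }
        } finally {
            pool.returnResource(client);
        }
    }
}

The key-to-shard mapping is decided entirely on the client side (consistent hashing over the shard list), which is what the "simple load balancing" in the title refers to: the two Redis instances do not know about each other.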


7. Done


Reprinted from:
http://my.oschina.net/hanshubo/blog/377910
