Redis JedisConnectionException: Could not get a resource from the pool

Reprinted: https://blog.csdn.net/testcs_dn/article/details/43052585

The usual causes of this error are:

1. Redis is not running.

I once ran into this very problem myself, embarrassingly enough.

2. A firewall is blocking the connection to Redis. Check:

1) the inbound rules of the firewall on the server where Redis runs;

2) the outbound rules on the host where the application accessing Redis resides.

3. Wrong IP address or port.

4. Jedis objects are not released after use. Release each Jedis instance back to the pool when you are done with it; otherwise it stays occupied forever and eventually no new resource can be obtained (see the example below).
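For example, with Jedis 3.x a borrowed connection can be returned automatically, because Jedis implements Closeable. A minimal sketch; the host, port, and key names are illustrative:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class JedisReleaseExample {
    public static void main(String[] args) {
        JedisPool pool = new JedisPool("localhost", 6379);
        // try-with-resources calls jedis.close(), which returns the
        // connection to the pool instead of leaving it occupied
        try (Jedis jedis = pool.getResource()) {
            jedis.set("greeting", "hello");
            System.out.println(jedis.get("greeting"));
        }
        pool.close(); // on application shutdown
    }
}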

5. Spring Boot project with missing setup

If you use Redis with Spring Boot, you can also hit this exception.
The Spring Boot Redis dependency alone is not enough:
you also need to download Redis from redis.io, install it, and start it from a terminal:


me@my_pc:/path/to/redis/dir$ ./src/redis-server ./redis.conf

With the server running, add the relevant lines to every application that uses Redis:
application.properties:

spring.redis.host=<yourhost>  # usually localhost, but can also be on a LAN
spring.redis.port=<yourport>  # usually 6379, but settable in redis.conf
application.yml:
...
spring:
  redis:
    host: <yourhost>  # usually localhost, but can also be on a LAN
    port: <yourport>  # usually 6379, but settable in redis.conf
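If you want to verify at startup that the application actually reaches Redis, a minimal sketch with spring-boot-starter-data-redis could look like this; the bean and key names are illustrative assumptions, not part of the original post:

import org.springframework.boot.CommandLineRunner;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.core.StringRedisTemplate;

@Configuration
public class RedisStartupCheck {

    @Bean
    CommandLineRunner pingRedis(StringRedisTemplate template) {
        return args -> {
            // one round trip proves spring.redis.host/port are correct
            template.opsForValue().set("startup:ping", "pong");
            System.out.println("Redis replied: " + template.opsForValue().get("startup:ping"));
        };
    }
}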

6. vm.overcommit_memory = 0 causes fork to fail


Running the ping command in redis-cli returns an error:

(error) MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.

Checking the Redis log then shows:

WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

 

The problem is clear: bgsave forks a child process, and with vm.overcommit_memory = 0 the kernel only allows the fork if the child can be promised as much memory as the parent is using. Since Redis had already used 60% of the machine's memory, the fork failed.

Solution:

Add vm.overcommit_memory = 1 to /etc/sysctl.conf, then run:

sysctl vm.overcommit_memory=1

Link: http://www.jianshu.com/p/bb552ccc43b9
Source: Jianshu
7. Fetching a Jedis instance from the JedisPool waits longer than the maxWaitMillis you configured, and "Could not get a resource from the pool" is thrown.

8. Idle connections in the pool are automatically dropped by the server after a while, but the pool still believes they are properly connected.

Reference pool configuration:


JedisPoolConfig config = new JedisPoolConfig();
config.setMaxTotal(200);
config.setMaxIdle(50);
// minimum number of idle connections to keep
config.setMinIdle(8);
config.setMaxWaitMillis(10000);
config.setTestOnBorrow(true);
config.setTestOnReturn(true);
// scan idle connections
config.setTestWhileIdle(true);
// milliseconds the idle-object evictor sleeps between two scans
config.setTimeBetweenEvictionRunsMillis(30000);
// maximum number of objects the idle-object evictor examines per scan
config.setNumTestsPerEvictionRun(10);
// minimum time an object must stay idle before the evictor may remove it;
// only meaningful when timeBetweenEvictionRunsMillis is greater than 0
config.setMinEvictableIdleTimeMillis(60000);

JedisPool pool = new JedisPool(config, ip, port, 10000, "password", 0);


Problem Cause Analysis:

Redis is the custodian of our data: we hand things to it and take them back whenever we like, big or small, important or not, and it stores them all without complaint. Sometimes we even get lazy and, while handing something over, stick a note on it: "throw this away for me in a week." Redis silently accepts that too (credit to antirez for designing Redis so well).

This story is about those little notes: the expiration times.

Redis provides a pair of "good" mechanisms for cleaning up expired data:
Lazy (passive) expiration: Redis expires data lazily. When a key reaches its expiration time, Redis does not clean it up immediately; only when the key is accessed again and found to be expired does Redis actively remove it.
Active (periodic) expiration: if an expired key is never accessed again, Redis does not simply ignore it. Ten times per second it performs the following cleanup:
        randomly sample 20 keys from among those that have an expiration time;
        delete any of them found to be expired;
        if more than 25% of the sampled keys were expired, go back to the first step and repeat.

This design can be understood like this: if more than a quarter of the keys with an expiration time are currently expired, Redis keeps cleaning them up until a sample comes back with fewer than a quarter expired, and only then drops back to the slow periodic random checks.
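To make that loop concrete, here is a rough Java sketch of the sampling cycle described above. This is only an illustration: the real implementation lives in C inside the Redis server, and the KeySpace interface below is purely hypothetical.

import java.util.List;

class ActiveExpireSketch {

    static final int SAMPLE_SIZE = 20;            // keys sampled per round
    static final double REPEAT_THRESHOLD = 0.25;  // repeat while more than 25% expired

    // Redis runs a cycle like this about 10 times per second.
    static void activeExpireCycle(KeySpace db) {
        double expiredRatio;
        do {
            List<String> sample = db.randomKeysWithTtl(SAMPLE_SIZE);
            int expired = 0;
            for (String key : sample) {
                if (db.isExpired(key)) {
                    db.delete(key); // deleting a huge Set here costs O(N)!
                    expired++;
                }
            }
            expiredRatio = sample.isEmpty() ? 0 : (double) expired / sample.size();
        } while (expiredRatio > REPEAT_THRESHOLD);
    }

    // hypothetical stand-in for the Redis keyspace
    interface KeySpace {
        List<String> randomKeysWithTtl(int n);
        boolean isExpired(String key);
        void delete(String key);
    }
}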



This error appeared because, after JedisPool handed out a connection, the jedis.ping() call did not get the expected "PONG" reply. When the problem surfaced there was no traffic peak and Redis's CPU usage was not high, so the programmer quickly suspected the network: packet loss?

But after it ran for a few more days, the errors turned out to cluster around fixed points in time (e.g. 21:00 - 10:00). Could the cleaning lady on her nightly rounds be bumping the network cable? (Sometimes you have to admire a programmer's imagination.)

Then, on a dark and stormy night, when a programmer's mind is usually at its most sober, the alert fired again. Opening the Redis monitoring dashboards and browsing the metrics like a shopper wandering a mall, he suddenly noticed the Expires graphs: they stood out like goods slapped with a "90% off" sticker, impossible not to stare at.

The rest you can mostly guess. A quick mental calculation: "a large number of keys expire at exactly this point in time," and the Set collections among them were large, each holding hundreds of thousands of entries. Another count: "the keys Redis most urgently needs to expire are exactly these."

So there was the answer. During its periodic expiration pass, Redis kept finding more than a quarter of the sampled keys expired, so it stayed busy deleting, deleting, deleting; on top of that, deleting a large collection is an O(N) operation. By the time Redis finished deleting, the client-side commands had already timed out.

Cause found; now how to solve it? That depends on the business scenario. For ours,
the approach was to set the expiration time somewhat longer, mark the keys that can be deleted, and hand them to a background thread that deletes the Set data page by page. That way, even when Redis finally expires the key, the deletion no longer takes much time; see the sketch below.
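A minimal sketch of that background cleanup, assuming Jedis; the page size and sleep interval below are illustrative, not taken from the original setup.

import java.util.Set;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class PagedSetCleaner {

    private static final int PAGE_SIZE = 500; // members removed per round

    // Removes a large Set in small pages so Redis never blocks on one big delete.
    public static void deleteSetInPages(JedisPool pool, String key) throws InterruptedException {
        while (true) {
            try (Jedis jedis = pool.getResource()) {
                // SPOP with a count removes up to PAGE_SIZE random members
                Set<String> removed = jedis.spop(key, PAGE_SIZE);
                if (removed == null || removed.isEmpty()) {
                    jedis.del(key); // nothing left; drop the (now tiny) key itself
                    return;
                }
            }
            Thread.sleep(50); // yield between pages to keep Redis responsive
        }
    }
}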

Finally, a small summary: take extra care when putting large collection objects into Redis. If you rely on Redis expiration to reclaim them, the expiration can briefly block Redis.

So treat your Redis data responsibly: clean up yourself what needs cleaning, and don't wait for Redis to do it for you.


Solutions (for the MISCONF error in item 6):

1. Log into Redis and disable the behavior:

redis-cli
127.0.0.1:6379> config set stop-writes-on-bgsave-error no

2. Use redis-cli to change the RDB directory:

CONFIG SET dir /tmp/redis_data
CONFIG SET dbfilename temp.rdb

Restart redis-server and the problem is solved; but the RDB path has to be set again every time the server starts.

3. Modify the redis.conf file directly:

dir /tmp/redis_data
dbfilename temp.rdb

then start the Redis service.

Source: www.cnblogs.com/ssjf/p/11225507.html