Redis: ERR max number of clients reached

ERR max number of clients reached

Use the commands below to find which process keeps holding connections, then investigate from there.

Get the connection information for port 6399:

netstat -tun | grep 6399 | awk '{print $5}' | awk -F':' '{print $1}' | sort | uniq -c
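The pipeline above takes the foreign-address column from netstat, strips the port, and tallies connections per remote IP. The same tally can be sketched in Python (the sample netstat lines below are made up for illustration):

```python
from collections import Counter

def count_clients(netstat_lines, port=6399):
    """Per-IP connection count, mirroring:
    netstat -tun | grep 6399 | awk '{print $5}' | awk -F':' '{print $1}' | sort | uniq -c
    """
    counts = Counter()
    for line in netstat_lines:
        fields = line.split()
        if len(fields) < 5:
            continue
        local, remote = fields[3], fields[4]      # Local Address, Foreign Address
        if local.endswith(f":{port}") or remote.endswith(f":{port}"):
            ip = remote.rsplit(":", 1)[0]          # strip the port, keep the IP
            counts[ip] += 1
    return counts

# Hypothetical `netstat -tun` output:
sample = [
    "tcp 0 0 10.0.0.5:6399 10.0.0.21:50312 ESTABLISHED",
    "tcp 0 0 10.0.0.5:6399 10.0.0.21:50313 ESTABLISHED",
    "tcp 0 0 10.0.0.5:6399 10.0.0.33:41100 ESTABLISHED",
]
print(count_clients(sample))  # Counter({'10.0.0.21': 2, '10.0.0.33': 1})
```

An IP with a disproportionately high count is the client to investigate first.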

Enter redis-cli and adjust the following parameters:

 config get maxclients
 config set maxclients 100000   # maximum number of client connections
 config set timeout 300         # idle connection timeout, in seconds
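Note that CONFIG SET only changes the running instance and is lost on restart. To persist the limits, the same settings can go in redis.conf (a sketch; the values are just the examples above). Also, the effective maxclients is capped by the OS open-file limit, so `ulimit -n` may need raising as well:

```
# redis.conf -- example values, adjust to your workload
maxclients 100000
timeout 300
```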

Check how many connections Redis has:

netstat -na | grep 6379 | wc -l

Check the Redis process:

ps aux|grep redis

Find out which application is connecting to Redis

1. Find the Redis process ID:

ps aux | grep redis

ps -ef | grep redis

2. Find the process listening on the port:

lsof -i:6379

3. Use the process ID to look up related information, such as ports:

netstat -anop | grep PID   # replace PID with the actual process ID

Additional error: the LOADING error below means Redis is still loading its persistence file into memory. It occurs when Redis (re)starts after some failure and has to reload the persisted dataset; once loading completes, the error goes away on its own.

redis.clients.jedis.exceptions.JedisDataException: LOADING Redis is loading the dataset in memory
	at redis.clients.jedis.Protocol.processError(Protocol.java:131)
	at redis.clients.jedis.Protocol.process(Protocol.java:165)
	at redis.clients.jedis.Protocol.read(Protocol.java:219)
	at redis.clients.jedis.Connection.readProtocolWithCheckingBroken(Connection.java:309)
	at redis.clients.jedis.Connection.getBinaryMultiBulkReply(Connection.java:270)
	at redis.clients.jedis.Connection.getMultiBulkReply(Connection.java:264)
	at redis.clients.jedis.Jedis.zrangeByScore(Jedis.java:2119)
	at cn.com.sse.online_Verification$.getqita(online_Verification.scala:247)
	at cn.com.sse.online_Verification$.onlinematchers(online_Verification.scala:180)
	at cn.com.sse.online_Verification$.TDXparser(online_Verification.scala:137)
	at cn.com.sse.online_Verification$$anonfun$1.apply(online_Verification.scala:54)
	at cn.com.sse.online_Verification$$anonfun$1.apply(online_Verification.scala:50)
	at org.apache.flink.streaming.api.scala.DataStream$$anon$4.map(DataStream.scala:619)
	at org.apache.flink.streaming.api.operators.StreamMap.processElement(StreamMap.java:41)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.pushToOperator(OperatorChain.java:579)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:554)
	at org.apache.flink.streaming.runtime.tasks.OperatorChain$CopyingChainingOutput.collect(OperatorChain.java:534)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:718)
	at org.apache.flink.streaming.api.operators.AbstractStreamOperator$CountingOutput.collect(AbstractStreamOperator.java:696)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collect(StreamSourceContexts.java:104)
	at org.apache.flink.streaming.api.operators.StreamSourceContexts$NonTimestampContext.collectWithTimestamp(StreamSourceContexts.java:111)
	at org.apache.flink.streaming.connectors.kafka.internals.AbstractFetcher.emitRecordWithTimestamp(AbstractFetcher.java:398)
	at org.apache.flink.streaming.connectors.kafka.internal.Kafka010Fetcher.emitRecord(Kafka010Fetcher.java:89)
	at org.apache.flink.streaming.connectors.kafka.internal.Kafka09Fetcher.runFetchLoop(Kafka09Fetcher.java:154)
	at org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.run(FlinkKafkaConsumerBase.java:665)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:94)
	at org.apache.flink.streaming.api.operators.StreamSource.run(StreamSource.java:58)
	at org.apache.flink.streaming.runtime.tasks.SourceStreamTask.run(SourceStreamTask.java:99)
	at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:300)
	at org.apache.flink.runtime.taskmanager.Task.run(Task.java:704)
	at java.lang.Thread.run(Thread.java:745)
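Since the LOADING reply clears once Redis finishes loading, a common client-side mitigation is to retry the operation rather than fail the job. A minimal retry sketch (all names here are illustrative, not from the original Flink job; the exception class stands in for the server's LOADING reply):

```python
import time

class RedisLoadingError(Exception):
    """Stand-in for the 'LOADING Redis is loading the dataset in memory' reply."""

def call_with_retry(op, retries=5, delay=0.0):
    """Retry op() while the server is still loading its persistence file."""
    for attempt in range(retries):
        try:
            return op()
        except RedisLoadingError:
            if attempt == retries - 1:
                raise                # give up after the final attempt
            time.sleep(delay)        # back off before retrying

# Demo: a fake operation that fails twice while "loading", then succeeds.
state = {"calls": 0}
def zrange_op():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RedisLoadingError("LOADING Redis is loading the dataset in memory")
    return ["member1"]

result = call_with_retry(zrange_op)
print(result)  # ['member1']
```

In a real Jedis or redis-py client the loading condition surfaces as a dedicated exception (e.g. Jedis raises a JedisDataException whose message starts with "LOADING"), which the retry loop would catch instead of this stand-in class.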


Reposted from blog.csdn.net/xiaozhaoshigedasb/article/details/90755496