Troubleshooting an Online Deadlock Issue

Problem description:
A production service suddenly went down and could no longer be invoked. Its logs showed that the Dubbo thread pool was completely exhausted:

2019-11-22 14:35:26.271  WARN 26516 --- [New I/O server worker #1-2] c.a.d.c.t.support.AbortPolicyWithReport  :  [DUBBO] Thread pool is EXHAUSTED! Thread Name: DubboServerHandler-192.168.10.26:12350, Pool Size: 200 (active: 200, core: 200, max: 200, largest: 200), Task: 6786 (completed: 6586), Executor status:(isShutdown:false, isTerminated:false, isTerminating:false), in dubbo://192.168.10.26:12350!, dubbo version: 2.6.2, current host: 192.168.10.26

Traffic was light, yet the thread count spiked, so the first guess was an infinite loop or a deadlock somewhere.

First, check the server's CPU usage. It was within the normal range, which largely rules out an infinite loop, since a busy loop would peg the CPU.

Next, find the service's process ID and dump its threads with jstack for analysis.
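For reference, a typical sequence looks like this (a sketch; the service name and PID below are placeholders, not values from the original incident):

ps -ef | grep <service-name>      # or: jps -l, to locate the JVM's PID
top -H -p <pid>                   # per-thread CPU view, confirming no busy loop
jstack -l <pid> > dump.txt        # -l adds lock/synchronizer detail to the dump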

"DubboServerHandler-192.168.10.26:12350-thread-200" #240 daemon prio=5 os_prio=0 tid=0x00007ffa7c141800 nid=0x6c89 waiting on condition [0x00007ffa17c7c000]
   java.lang.Thread.State: WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x00000000e0d24020> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
    at org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:590)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:441)
    at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:362)
    at redis.clients.util.Pool.getResource(Pool.java:49)
    at redis.clients.jedis.JedisPool.getResource(JedisPool.java:226)
    at (RedisHelper.java:322)
    at (RedisHelper.java:106)
    at (BlockResourceManager.java:54)
    - locked <0x00000000e0ec22d0> 

Note the last frame: this thread shows locked <0x00000000e0ec22d0>, meaning it holds that monitor, while every other worker thread's state is waiting to lock <0x00000000e0ec22d0>.
So this single thread is what is blocking all the others.

So what keeps this thread from releasing the lock quickly?
Reading up the stack: it is trying to borrow a Jedis client from the JedisPool, but no client ever becomes available, so it stays blocked while still holding the monitor.
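Put together, the trace implies a pattern like the sketch below (the class name comes from the trace, but the field and method names are assumptions, since the original frames were redacted): one thread enters a synchronized section and then parks inside JedisPool.getResource(). With commons-pool2's defaults (blockWhenExhausted=true, maxWaitMillis=-1) that wait never times out, so the monitor is never released and the remaining Dubbo workers pile up behind it.

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class BlockResourceManager {
    // The monitor reported as <0x00000000e0ec22d0> in the dump.
    private final Object lock = new Object();
    private final JedisPool pool = new JedisPool("127.0.0.1", 6379);

    public String acquire(String key) {
        synchronized (lock) {                  // one worker holds this monitor...
            Jedis jedis = pool.getResource();  // ...and parks here: the pool is empty and,
            try {                              // by default, borrowObject() waits forever,
                return jedis.get(key);         // so the other workers queue on the monitor
            } finally {
                jedis.close();
            }
        }
    }
}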

The root cause turned out to be a method that borrowed a client from the pool but never returned it; over time this leaked every connection and exhausted the pool.
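As an illustration, the leak and its fix look roughly like this (RedisHelper's real code was not shown in the original, so the method bodies below are hypothetical; only the borrow-without-return pattern is the point). In Jedis 2.x a pooled instance implements Closeable and close() returns it to the pool, so try-with-resources guarantees the return on every code path:

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;

public class RedisHelper {
    private final JedisPool pool = new JedisPool("127.0.0.1", 6379);

    // Buggy: the borrowed client is never returned. Each call (or any
    // exception from get()) permanently removes one connection from the pool.
    public String getLeaky(String key) {
        Jedis jedis = pool.getResource();
        return jedis.get(key);             // no close() -> connection leaked
    }

    // Fixed: try-with-resources returns the client to the pool on every path.
    public String getSafe(String key) {
        try (Jedis jedis = pool.getResource()) {
            return jedis.get(key);
        }
    }
}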

Reposted from www.cnblogs.com/insaneXs/p/11919366.html