Fixing Redis connection-pool exhaustion and blocked threads caused by the Redis SCAN command
Using the Redis scan method without releasing the connection exhausts the pool, leaving threads blocked waiting for a connection.
0. Keywords
redis
springboot
redistemplate
scan
try-with-resource
1. Symptoms
After deployment the application worked normally, but after roughly a few hours some endpoints became unresponsive: requests simply hung with no reply.
2. Investigation
Dump the thread stacks with jstack. First find the Java process pid (for example with jps), then write the stacks to a log file: jstack 30 > stack.log. The dump showed Redis-related frames, with the threads in the WAITING state.
"pool-13-thread-6" prio=10 tid=0x00007f754800e800 nid=0x71b5 waiting on condition [0x00007f758f0ee000]
   java.lang.Thread.State: WAITING (parking)
        at sun.misc.Unsafe.park(Native Method)
        - parking to wait for <0x0000000779b75f40> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
        at java.util.concurrent.locks.LockSupport.park(LockSupport.java:186)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2043)
        at org.apache.commons.pool2.impl.LinkedBlockingDeque.takeFirst(LinkedBlockingDeque.java:583)
        at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:442)
        at org.apache.commons.pool2.impl.GenericObjectPool.borrowObject(GenericObjectPool.java:363)
        at redis.clients.util.Pool.getResource(Pool.java:49)
        at redis.clients.jedis.JedisPool.getResource(JedisPool.java:99)
        at org.reborndb.reborn.RoundRobinJedisPool.getResource(RoundRobinJedisPool.java:300)
        at com.le.smartconnect.adapter.spring.RebornConnectionFactory.getConnection(RebornConnectionFactory.java:43)
        at org.springframework.data.redis.core.RedisConnectionUtils.doGetConnection(RedisConnectionUtils.java:128)
        at org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:91)
        at org.springframework.data.redis.core.RedisConnectionUtils.getConnection(RedisConnectionUtils.java:78)
        at xxx.run(xxx.java:80)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
        at java.util.concurrent.FutureTask.run(FutureTask.java:262)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)

   Locked ownable synchronizers:
        - <0x000000074f529b08> (a java.util.concurrent.ThreadPoolExecutor$Worker)
In other words, no Redis connection could be obtained; the threads were parked waiting for one to become available. Most likely some module in the application acquired a connection and never released it. Searching the code turned up one feature that used scan:
public void releaseCallbackMessage() throws Exception {
    // the cursor is never closed here -- this is what leaks the connection
    Cursor<Map.Entry<Object, Object>> cursor = RedisCacheUtils.scan(key);
    if (cursor == null) {
        logger.info("scan(H key, ScanOptions options) matched no entries");
        return;
    }
    while (cursor.hasNext()) {
        // iterate over the cached entries
        Map.Entry<Object, Object> entry = cursor.next();
        String entryKey = String.valueOf(entry.getKey());
    }
}
Reading the scan source shows that it never releases the connection itself, whereas get/set operations always do:
public Cursor<Entry<HK, HV>> scan(K key, ScanOptions options) {

    byte[] rawKey = rawKey(key);
    return template.executeWithStickyConnection(
            (RedisCallback<Cursor<Entry<HK, HV>>>) connection -> new ConvertingCursor<>(
                    connection.hScan(rawKey, options),
                    new Converter<Entry<byte[], byte[]>, Entry<HK, HV>>() {

                        @Override
                        public Entry<HK, HV> convert(final Entry<byte[], byte[]> source) {

                            return new Entry<HK, HV>() {

                                @Override
                                public HK getKey() {
                                    return deserializeHashKey(source.getKey());
                                }

                                @Override
                                public HV getValue() {
                                    return deserializeHashValue(source.getValue());
                                }

                                @Override
                                public HV setValue(HV value) {
                                    throw new UnsupportedOperationException("Values cannot be set when scanning through entries.");
                                }
                            };
                        }
                    }));
}
The get path, by contrast, goes through execute(...), whose finally block calls releaseConnection:
@Nullable
public <T> T execute(RedisCallback<T> action, boolean exposeConnection, boolean pipeline) {

    Assert.isTrue(initialized, "template not initialized; call afterPropertiesSet() before using it");
    Assert.notNull(action, "Callback object must not be null");

    RedisConnectionFactory factory = getRequiredConnectionFactory();
    RedisConnection conn = null;
    try {
        if (enableTransactionSupport) {
            // only bind resources in case of potential transaction synchronization
            conn = RedisConnectionUtils.bindConnection(factory, enableTransactionSupport);
        } else {
            conn = RedisConnectionUtils.getConnection(factory);
        }

        boolean existingConnection = TransactionSynchronizationManager.hasResource(factory);
        RedisConnection connToUse = preProcessConnection(conn, existingConnection);

        boolean pipelineStatus = connToUse.isPipelined();
        if (pipeline && !pipelineStatus) {
            connToUse.openPipeline();
        }

        RedisConnection connToExpose = (exposeConnection ? connToUse : createRedisConnectionProxy(connToUse));
        T result = action.doInRedis(connToExpose);

        // close pipeline
        if (pipeline && !pipelineStatus) {
            connToUse.closePipeline();
        }

        // TODO: any other connection processing?
        return postProcessResult(result, connToUse, existingConnection);
    } finally {
        RedisConnectionUtils.releaseConnection(conn, factory);
    }
}
3. Fix
After a scan, close the cursor explicitly; the cleanest way is a try-with-resources block.
1. Always close the cursor after a Redis scan, i.e. call cursor.close() (or let try-with-resources do it for you);
2. Enforce this pattern through coding standards and reviews.
try (Cursor<Map.Entry<Object, Object>> cursor = RedisCacheUtils.scan(key)) {
    if (cursor == null) {
        logger.info("scan(H key, ScanOptions options) matched no entries");
        return;
    }
    while (cursor.hasNext()) {
        // iterate over the cached entries
        Map.Entry<Object, Object> entry = cursor.next();
        String entryKey = String.valueOf(entry.getKey());
    }
} catch (Exception ex) {
    logger.error(ex.toString(), ex);
}
One thing worth noting about try-with-resources: the resource is closed automatically only if it implements java.lang.AutoCloseable (Spring Data's Cursor implements Closeable, which extends AutoCloseable, so it qualifies).
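As a minimal self-contained illustration (the class names below are invented for this sketch), any AutoCloseable resource is closed when the try block exits, even on an early return:

```java
// Hypothetical resource that records whether close() ran.
class TrackedResource implements AutoCloseable {
    boolean closed = false;

    @Override
    public void close() {
        closed = true; // invoked automatically when the try block exits
    }
}

public class TryWithResourcesDemo {

    // Without try-with-resources: nothing closes the resource.
    static TrackedResource useAndLeak() {
        TrackedResource r = new TrackedResource();
        return r;
    }

    // With try-with-resources: close() runs even though we return from inside the block.
    static TrackedResource useWithTry() {
        TrackedResource r = new TrackedResource();
        try (TrackedResource managed = r) {
            return r;
        }
    }

    public static void main(String[] args) {
        System.out.println("leaked resource closed:  " + useAndLeak().closed);
        System.out.println("managed resource closed: " + useWithTry().closed);
    }
}
```

This is exactly why the fixed scan code above no longer leaks: the cursor's close() runs on every exit path, including the early return.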
Additional case: locating and fixing an unavailable application caused by unreleased Redis connections filling the connection pool
A few days after the build went into acceptance testing, the testers suddenly reported that the platform was unavailable.
1. I first tried logging into the platform: the home page loaded, but login failed, so the problem was clearly in the backend.
2. In MQ, messages were piling up unconsumed, and the backlog kept growing bit by bit, which suggested a memory or connection leak somewhere.
3. Logging into the Aliyun console and checking the Redis monitoring showed the connection count had climbed past 9000.
4. The logs contained a large number of Redis connection errors:
redis.clients.jedis.exceptions.JedisConnectionException: Could not get a resource from the pool
	at redis.clients.util.Pool.getResource(Pool.java:42)
	at redis.clients.jedis.JedisPool.getResource(JedisPool.java:84)
	at com.***(**.java:58)
	at com.***(**.java:86)
	at com.***(**.java:27)
	at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
	at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
	at org.apache.log4j.Category.callAppenders(Category.java:206)
	at org.apache.log4j.Category.forcedLog(Category.java:391)
	at org.apache.log4j.Category.log(Category.java:856)
	at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:571)
	at com.***(**.java:61)
	at com.***(**.java:86)
	at com.***(**.java:27)
	at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
	at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
	at org.apache.log4j.Category.callAppenders(Category.java:206)
	at org.apache.log4j.Category.forcedLog(Category.java:391)
	at org.apache.log4j.Category.log(Category.java:856)
	at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:571)
	at com.***(**.java:61)
	at com.***(**.java:86)
	at com.***(**.java:27)
	at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
5. Tomcat on the backend also logged other errors, for example:
Exception in thread "Thread-18" java.lang.StackOverflowError
	at java.util.Hashtable.get(Hashtable.java:367)
	at java.util.Properties.getProperty(Properties.java:969)
	at java.lang.System.getProperty(System.java:720)
	at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:86)
	at sun.security.action.GetPropertyAction.run(GetPropertyAction.java:52)
	at java.security.AccessController.doPrivileged(Native Method)
	at java.io.PrintWriter.<init>(PrintWriter.java:116)
	at java.io.PrintWriter.<init>(PrintWriter.java:100)
	at org.apache.log4j.DefaultThrowableRenderer.render(DefaultThrowableRenderer.java:58)
	at org.apache.log4j.spi.ThrowableInformation.getThrowableStrRep(ThrowableInformation.java:87)
	at com.aliyun.openservices.log.log4j.LoghubAppender.append(LoghubAppender.java:116)
	at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
	at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
	at org.apache.log4j.Category.callAppenders(Category.java:206)
	at org.apache.log4j.Category.forcedLog(Category.java:391)
	at org.apache.log4j.Category.log(Category.java:856)
	at org.slf4j.impl.Log4jLoggerAdapter.error(Log4jLoggerAdapter.java:571)
	at com.***(**.java:61)
	at com.***(**.java:86)
	at com.***(**.java:27)
	at org.apache.log4j.AppenderSkeleton.doAppend(AppenderSkeleton.java:251)
	at org.apache.log4j.helpers.AppenderAttachableImpl.appendLoopOnAppenders(AppenderAttachableImpl.java:66)
	at org.apache.log4j.Category.callAppenders(Category.java:206)
	at org.apache.log4j.Category.forcedLog(Category.java:391)
	at org.apache.log4j.Category.log(Category.java:856)
6. Still, I kept reading the logs and noticed a debug line that appeared over and over: [TaskId]PUPHVUNVJSSMKOTQPKHRSPOMUKKOKLPG [Message]null [Result]0. Only a caller that keeps sending requests without ever closing its connection could produce this many connections. Matching the line back to the code showed it came from a page calling the backend, which in turn sends data to devices over a long-lived connection.
// simplified from the original post: the buggy version, with no finally block
try {
    for (...) {
        jedis = RedisManagerPool.getJedis();
        if (!"NULL".equals(value)) {
            break;                 // leaves the loop WITHOUT returning jedis to the pool
        }
        RedisUtils.return(jedis);
    }
} catch (Exception e) {
    logger.error(e.getMessage(), e);
    RedisUtils.returnBroken(jedis);
}
return value;
7. The try block has no finally clause, so when the condition is met the loop breaks and the method returns value, and RedisUtils.return(jedis) never executes. I then suspected the page was polling: F12 showed a request every 10 seconds to fetch the device's upstream/downstream rates, which is how so many requests accumulated. The fix was simple: add a finally block so that RedisUtils.return(jedis) is guaranteed to run.
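The corrected shape of this pattern can be sketched with a self-contained toy pool (CountingPool and lookup are invented for illustration; they stand in for RedisManagerPool / RedisUtils from the snippet above):

```java
import java.util.concurrent.atomic.AtomicInteger;

// Toy stand-in for a connection pool: counts outstanding borrows.
class CountingPool {
    final AtomicInteger borrowed = new AtomicInteger();

    Object getResource() {
        borrowed.incrementAndGet();
        return new Object();
    }

    void returnResource(Object res) {
        borrowed.decrementAndGet();
    }
}

public class FinallyReleaseDemo {

    // Fixed pattern: whatever happens inside try (break, early return,
    // exception), the finally block returns the resource to the pool.
    static String lookup(CountingPool pool, String[] values) {
        for (String value : values) {
            Object conn = pool.getResource();
            try {
                if (!"NULL".equals(value)) {
                    return value;              // early return no longer leaks
                }
            } finally {
                pool.returnResource(conn);     // always executed
            }
        }
        return null;
    }

    public static void main(String[] args) {
        CountingPool pool = new CountingPool();
        lookup(pool, new String[] {"NULL", "NULL", "42"});
        // Every borrow was matched by a return, so nothing is outstanding.
        System.out.println("outstanding connections: " + pool.borrowed.get());
    }
}
```

With the original code, each iteration that hit break left one connection borrowed; polled every 10 seconds, that is enough to drain any pool within hours.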
8. Next, to reproduce the problem in another development environment, we set maxTotal=300 and maxIdle=30 in RedisManagerPool (down from 3000 and 300), which makes the pool much faster to exhaust.
Sure enough, the 300-connection limit was reached within a single morning and the same failure appeared.
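A hedged configuration sketch of that pool shrinking, assuming a plain Jedis setup (the Repro class and smallPool method are invented; the numeric values are the ones used in the experiment above):

```java
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;

// Configuration fragment: a deliberately small pool so a connection leak
// exhausts it within hours instead of days.
class Repro {
    static JedisPool smallPool(String host, int port) {
        JedisPoolConfig config = new JedisPoolConfig();
        config.setMaxTotal(300);       // production used 3000
        config.setMaxIdle(30);         // production used 300
        config.setMaxWaitMillis(2000); // fail fast instead of parking threads forever
        return new JedisPool(config, host, port);
    }
}
```

Setting a bounded maxWait is worth considering in production too: in the jstack dump at the top of this article the threads parked indefinitely in borrowObject, whereas a bounded wait would have surfaced the leak as a loud exception much earlier.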
9. In summary, the problem was located and fixed. Lessons learned:
a) Third-party network connections such as OSS or Redis must be released in a finally block; otherwise a single careless code path is enough to leave connections unreleased.
b) Good logs are essential for locating problems; working from the code alone leaves too many directions to check and wastes time.
c) Shrinking the pool size shortens the reproduction cycle, which speeds up both locating the bug and verifying the fix.