Spring Cloud Gateway Memory Overflow: A Solution

A record of tracking down a Spring Cloud Gateway memory overflow.

Environment:

  • org.springframework.boot : 2.1.4.RELEASE
  • org.springframework.cloud : Greenwich.SR1

Incident record:

Because the gateway was losing the RequestBody, we adopted the common fix circulating online, configured as follows:

@Bean
public RouteLocator tpauditRoutes(RouteLocatorBuilder builder) {
    return builder.routes().route("gateway-post", r -> r.order(1)
        .method(HttpMethod.POST)
        .and()
        .readBody(String.class, requestBody -> true) // this is the key part
        .and()
        .path("/gateway/**")
        .filters(f -> f.stripPrefix(1))
        .uri("lb://APP-API")).build();
}

With the Spring Cloud Gateway functionality finished, we started load testing in the test environment.

We used the usual ramp-up approach, with a peak of 400 concurrent users. Two rounds of roughly ten-minute runs completed without any anomalies.

Over the lunch break, a one-hour run was scheduled.

When we came back, the system had thrown the following exception:

2019-08-12 15:06:07,296 1092208 [reactor-http-server-epoll-12] WARN  io.netty.channel.AbstractChannelHandlerContext.warn:146 - An exception '{}' [enable DEBUG level for full stacktrace] was thrown by a user handler's exceptionCaught() method while handling the following exception:
io.netty.util.internal.OutOfDirectMemoryError: failed to allocate 16777216 byte(s) of direct memory (used: 503316487, max: 504889344)
 at io.netty.util.internal.PlatformDependent.incrementMemoryCounter(PlatformDependent.java:640)
 at io.netty.util.internal.PlatformDependent.allocateDirectNoCleaner(PlatformDependent.java:594)
 at io.netty.buffer.PoolArena$DirectArena.allocateDirect(PoolArena.java:764)
 at io.netty.buffer.PoolArena$DirectArena.newChunk(PoolArena.java:740)
 at io.netty.buffer.PoolArena.allocateNormal(PoolArena.java:244)
 at io.netty.buffer.PoolArena.allocate(PoolArena.java:214)
 at io.netty.buffer.PoolArena.allocate(PoolArena.java:146)
 at io.netty.buffer.PooledByteBufAllocator.newDirectBuffer(PooledByteBufAllocator.java:324)
 at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:185)
 at io.netty.buffer.AbstractByteBufAllocator.directBuffer(AbstractByteBufAllocator.java:176)
 at io.netty.buffer.AbstractByteBufAllocator.ioBuffer(AbstractByteBufAllocator.java:137)
 at io.netty.channel.DefaultMaxMessagesRecvByteBufAllocator$MaxMessageHandle.allocate(DefaultMaxMessagesRecvByteBufAllocator.java:114)
 at io.netty.channel.epoll.EpollRecvByteAllocatorHandle.allocate(EpollRecvByteAllocatorHandle.java:72)
 at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:793)
 at io.netty.channel.epoll.AbstractEpollChannel$AbstractEpollUnsafe$1.run(AbstractEpollChannel.java:382)
 at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:163)
 at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:404)
 at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:315)
 at io.

We were completely baffled at first. We immediately began monitoring the JVM heap, shrank the JVM's memory, raised the concurrency, then restarted the project and re-ran the load test.

The project startup parameters were changed as follows:

java -jar -Xmx1024M /opt/deploy/gateway-appapi/cloud-employ-gateway-0.0.5-SNAPSHOT.jar
↓↓↓↓ changed to ↓↓↓↓
java -jar -Xmx512M /opt/deploy/gateway-appapi/cloud-employ-gateway-0.0.5-SNAPSHOT.jar

With the heap halved, we waited for the problem to recur. It did after about three minutes, and this time the JVM had also performed a Full GC:

      EC       EU        OC         OU       MC     MU    CCSC   CCSU   YGC     YGCT    FGC    FGCT
 275456.0 100103.0  484864.0   50280.2  67672.0 64001.3 9088.0 8463.2    501   11.945   3      0.262
 275968.0 25072.3   484864.0   47329.3  67672.0 63959.4 9088.0 8448.8    502   11.970   4      0.429 

Sure enough, a Full GC occurred right when the problem appeared, yet the old-generation usage (OU) was nowhere near high enough to be the trigger.
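For context, the columns above (EC/EU, OC/OU, MC/MU, YGC/YGCT, FGC/FGCT) are jstat -gc output; as a minimal sketch, such samples can be collected with the command below, where <gateway_pid> is a placeholder for the gateway's process id and 5000 is the sampling interval in milliseconds:

jstat -gc <gateway_pid> 5000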

The mention of direct memory in the log pointed us to the JVM's off-heap memory.

-XX:MaxDirectMemorySize sets the size of the JVM's off-heap (direct) memory; once the off-heap memory allocated for Direct ByteBuffers reaches that limit, a Full GC is triggered.

This value is capped: when it is not configured explicitly, the effective limit falls back to sun.misc.VM.maxDirectMemory(), which is what the "max: 504889344" in the error above reflects.
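Purely as an illustrative sketch (not something we ran at this point), the flag could be appended to the same startup command, for example capping direct memory at 256 MB:

java -jar -Xmx512M -XX:MaxDirectMemorySize=256M /opt/deploy/gateway-appapi/cloud-employ-gateway-0.0.5-SNAPSHOT.jar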

Putting everything together, the evidence pointed to off-heap (direct) memory being exhausted, i.e. a direct-memory leak.
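Direct-memory usage can also be sampled in code while reproducing. The sketch below is illustrative only (the class name DirectMemoryProbe is made up): it prints the JDK's direct buffer pool statistics via BufferPoolMXBean and, assuming Netty 4.1's allocator metrics are available, the usage reported by Netty's default pooled allocator:

import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;

import io.netty.buffer.PooledByteBufAllocator;

public class DirectMemoryProbe {
    public static void main(String[] args) {
        // JDK view of direct ByteBuffers (may not include Netty's "noCleaner" allocations)
        for (BufferPoolMXBean pool : ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class)) {
            System.out.printf("%s pool: used=%d bytes, capacity=%d bytes%n",
                    pool.getName(), pool.getMemoryUsed(), pool.getTotalCapacity());
        }
        // Netty's own pooled allocator statistics (Reactor Netty uses this allocator by default)
        System.out.println("netty pooled direct: "
                + PooledByteBufAllocator.DEFAULT.metric().usedDirectMemory() + " bytes");
    }
}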

Since the error came from the Netty framework, we added the following settings to turn on Netty's leak-detection logging:

-Dio.netty.leakDetection.targetRecords=40 # maximum number of access records to keep per leak
-Dio.netty.leakDetection.level=advanced   # leak-detection level

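The detection level can also be set programmatically; here is a minimal sketch (GatewayApplication is a hypothetical main class), which must run before Netty allocates its first buffer:

import io.netty.util.ResourceLeakDetector;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class GatewayApplication { // hypothetical main class, for illustration only
    public static void main(String[] args) {
        // Programmatic equivalent of -Dio.netty.leakDetection.level=advanced
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED);
        SpringApplication.run(GatewayApplication.class, args);
    }
}
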
The project started with no problems; once the load test began, the service reported the following error:

2019-08-13 14:59:01,656 18047 [reactor-http-nio-7] ERROR io.netty.util.ResourceLeakDetector.reportTracedLeak:317 - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
Recent access records:
#1:
	org.springframework.core.io.buffer.NettyDataBuffer.release(NettyDataBuffer.java:301)
	org.springframework.core.io.buffer.DataBufferUtils.release(DataBufferUtils.java:420)
	org.springframework.core.codec.StringDecoder.decodeDataBuffer(StringDecoder.java:208)
	org.springframework.core.codec.StringDecoder.decodeDataBuffer(StringDecoder.java:59)
	org.springframework.core.codec.AbstractDataBufferDecoder.lambda$decodeToMono$1(AbstractDataBufferDecoder.java:68)
	reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:107)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxJust$WeakScalarSubscription.request(FluxJust.java:101)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onSubscribe(MonoCollectList.java:90)
	reactor.core.publisher.FluxJust.subscribe(FluxJust.java:70)
	reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:54)
	reactor.core.publisher.MonoCollectList.subscribe(MonoCollectList.java:59)
	reactor.core.publisher.MonoFilterFuseable.subscribe(MonoFilterFuseable.java:44)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:56)
	reactor.core.publisher.MonoSubscriberContext.subscribe(MonoSubscriberContext.java:47)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoPeek.subscribe(MonoPeek.java:71)
	reactor.core.publisher.MonoMap.subscribe(MonoMap.java:55)
	reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.netty.channel.FluxReceive.terminateReceiver(FluxReceive.java:372)
	reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:196)
	reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:337)
	reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:333)
	reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:453)
	reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:191)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	java.lang.Thread.run(Unknown Source)
#2:
	io.netty.buffer.AdvancedLeakAwareByteBuf.nioBuffer(AdvancedLeakAwareByteBuf.java:712)
	org.springframework.core.io.buffer.NettyDataBuffer.asByteBuffer(NettyDataBuffer.java:266)
	org.springframework.core.codec.StringDecoder.decodeDataBuffer(StringDecoder.java:207)
	org.springframework.core.codec.StringDecoder.decodeDataBuffer(StringDecoder.java:59)
	org.springframework.core.codec.AbstractDataBufferDecoder.lambda$decodeToMono$1(AbstractDataBufferDecoder.java:68)
	reactor.core.publisher.FluxMapFuseable$MapFuseableSubscriber.onNext(FluxMapFuseable.java:107)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxJust$WeakScalarSubscription.request(FluxJust.java:101)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onSubscribe(MonoCollectList.java:90)
	reactor.core.publisher.FluxJust.subscribe(FluxJust.java:70)
	reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:54)
	reactor.core.publisher.MonoCollectList.subscribe(MonoCollectList.java:59)
	reactor.core.publisher.MonoFilterFuseable.subscribe(MonoFilterFuseable.java:44)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:56)
	reactor.core.publisher.MonoSubscriberContext.subscribe(MonoSubscriberContext.java:47)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoPeek.subscribe(MonoPeek.java:71)
	reactor.core.publisher.MonoMap.subscribe(MonoMap.java:55)
	reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.netty.channel.FluxReceive.terminateReceiver(FluxReceive.java:372)
	reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:196)
	reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:337)
	reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:333)
	reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:453)
	reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:191)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	java.lang.Thread.run(Unknown Source)
#3:
	io.netty.buffer.AdvancedLeakAwareByteBuf.slice(AdvancedLeakAwareByteBuf.java:82)
	org.springframework.core.io.buffer.NettyDataBuffer.slice(NettyDataBuffer.java:260)
	org.springframework.core.io.buffer.NettyDataBuffer.slice(NettyDataBuffer.java:42)
	org.springframework.cloud.gateway.handler.predicate.ReadBodyPredicateFactory.lambda$null$0(ReadBodyPredicateFactory.java:102)
	reactor.core.publisher.FluxDefer.subscribe(FluxDefer.java:46)
	reactor.core.publisher.MonoCollectList.subscribe(MonoCollectList.java:59)
	reactor.core.publisher.MonoFilterFuseable.subscribe(MonoFilterFuseable.java:44)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:56)
	reactor.core.publisher.MonoSubscriberContext.subscribe(MonoSubscriberContext.java:47)
	reactor.core.publisher.MonoMapFuseable.subscribe(MonoMapFuseable.java:59)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoOnErrorResume.subscribe(MonoOnErrorResume.java:44)
	reactor.core.publisher.MonoPeek.subscribe(MonoPeek.java:71)
	reactor.core.publisher.MonoMap.subscribe(MonoMap.java:55)
	reactor.core.publisher.MonoFlatMap$FlatMapMain.onNext(MonoFlatMap.java:150)
	reactor.core.publisher.FluxContextStart$ContextStartSubscriber.onNext(FluxContextStart.java:103)
	reactor.core.publisher.FluxMapFuseable$MapFuseableConditionalSubscriber.onNext(FluxMapFuseable.java:287)
	reactor.core.publisher.FluxFilterFuseable$FilterFuseableConditionalSubscriber.onNext(FluxFilterFuseable.java:331)
	reactor.core.publisher.Operators$MonoSubscriber.complete(Operators.java:1505)
	reactor.core.publisher.MonoCollectList$MonoBufferAllSubscriber.onComplete(MonoCollectList.java:123)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.core.publisher.FluxPeek$PeekSubscriber.onComplete(FluxPeek.java:252)
	reactor.core.publisher.FluxMap$MapSubscriber.onComplete(FluxMap.java:136)
	reactor.netty.channel.FluxReceive.terminateReceiver(FluxReceive.java:372)
	reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:196)
	reactor.netty.channel.FluxReceive.onInboundComplete(FluxReceive.java:337)
	reactor.netty.channel.ChannelOperations.onInboundComplete(ChannelOperations.java:333)
	reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:453)
	reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:141)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:191)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:438)
	io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:323)
	io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:297)
	io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:253)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:337)
	io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1408)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:359)
	io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:345)
	io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:930)
	io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
	io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:677)
	io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:612)
	io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:529)
	io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:491)
	io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:905)
	java.lang.Thread.run(Unknown Source)

In record #3 I spotted a familiar class, ReadBodyPredicateFactory.java. Remember the readBody configuration used at the very beginning?

This is the class that writes the cached request body (cachedRequestBodyObject).

Let's trace the readBody source:

 /**
  * This predicate is BETA and may be subject to change in a future release. A
  * predicate that checks the contents of the request body
  * @param inClass the class to parse the body to
  * @param predicate a predicate to check the contents of the body
  * @param <T> the type the body is parsed to
  * @return a {@link BooleanSpec} to be used to add logical operators
  */
 public <T> BooleanSpec readBody(Class<T> inClass, Predicate<T> predicate) {
  return asyncPredicate(getBean(ReadBodyPredicateFactory.class)
    .applyAsync(c -> c.setPredicate(inClass, predicate)));
 }

The asynchronously invoked ReadBodyPredicateFactory.applyAsync() matches the method referenced in the error log:

org.springframework.cloud.gateway.handler.predicate.ReadBodyPredicateFactory.lambda$null$0(ReadBodyPredicateFactory.java:102)

Looking at line 102 of that source:

Flux<DataBuffer> cachedFlux = Flux.defer(() ->
 Flux.just(dataBuffer.slice(0, dataBuffer.readableByteCount()))
);

Here Spring Cloud Gateway slices a new dataBuffer out of the original via dataBuffer.slice, but Netty's leak detector shows that this buffer is never released.
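To make the reference counting concrete, here is a minimal standalone sketch (not gateway code; the class name SliceRefCountDemo is made up): a slice shares its parent's reference count, so after the extra retain() that ReadBodyPredicateFactory performs, release() must eventually be called a matching number of times or the pooled direct memory is never returned:

import io.netty.buffer.ByteBuf;
import io.netty.buffer.PooledByteBufAllocator;

public class SliceRefCountDemo {
    public static void main(String[] args) {
        ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer(16);
        buf.writeBytes("hello".getBytes());

        ByteBuf slice = buf.slice(0, buf.readableBytes()); // shares buf's reference count
        buf.retain();                                      // refCnt: 1 -> 2 (what DataBufferUtils.retain does)

        buf.release();                                     // refCnt: 2 -> 1
        System.out.println(slice.refCnt());                // prints 1: the buffer is still not returned to the pool
    }
}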

The error looks like this; with so much log output it is easy to overlook:

ERROR io.netty.util.ResourceLeakDetector.reportTracedLeak:317 - LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.

Having found the problem, we needed a fix, so we tried modifying the source:

@Override
@SuppressWarnings("unchecked")
public AsyncPredicate<ServerWebExchange> applyAsync(Config config) {
    return exchange -> {
        Class inClass = config.getInClass();

        Object cachedBody = exchange.getAttribute(CACHE_REQUEST_BODY_OBJECT_KEY);
        Mono<?> modifiedBody;
        // We can only read the body from the request once, once that
        // happens if we
        // try to read the body again an exception will be thrown. The below
        // if/else
        // caches the body object as a request attribute in the
        // ServerWebExchange
        // so if this filter is run more than once (due to more than one
        // route
        // using it) we do not try to read the request body multiple times
        if (cachedBody != null) {
            try {
                boolean test = config.predicate.test(cachedBody);
                exchange.getAttributes().put(TEST_ATTRIBUTE, test);
                return Mono.just(test);
            } catch (ClassCastException e) {
                if (LOGGER.isDebugEnabled()) {
                    LOGGER.debug("Predicate test failed because class in predicate "
                            + "does not match the cached body object", e);
                }
            }
            return Mono.just(false);
        } else {
            // Join all the DataBuffers so we have a single DataBuffer for
            // the body
            return DataBufferUtils.join(exchange.getRequest().getBody()).flatMap(dataBuffer -> {
                // Update the retain counts so we can read the body twice,
                // once to parse into an object
                // that we can test the predicate against and a second time
                // when the HTTP client sends
                // the request downstream
                // Note: if we end up reading the body twice we will run
                // into
                // a problem, but as of right
                // now there is no good use case for doing this
                DataBufferUtils.retain(dataBuffer);
                // Make a slice for each read so each read has its own
                // read/write indexes
                Flux<DataBuffer> cachedFlux = Flux
                        .defer(() -> Flux.just(dataBuffer.slice(0, dataBuffer.readableByteCount())));

                ServerHttpRequest mutatedRequest = new ServerHttpRequestDecorator(exchange.getRequest()) {
                    @Override
                    public Flux<DataBuffer> getBody() {
                        return cachedFlux;
                    }
                };
                // Newly added line: release the original dataBuffer
                DataBufferUtils.release(dataBuffer);

                return ServerRequest.create(exchange.mutate().request(mutatedRequest).build(), messageReaders)
                        .bodyToMono(inClass).doOnNext(objectValue -> {
                            exchange.getAttributes().put(CACHE_REQUEST_BODY_OBJECT_KEY, objectValue);
                            exchange.getAttributes().put(CACHED_REQUEST_BODY_KEY, cachedFlux);
                        }).map(objectValue -> config.predicate.test(objectValue));
            });

        }
    };
}

Our stack was on Spring Cloud Gateway 2.1.1. After applying the change above and restarting, the problem no longer reproduced under load and the gateway ran normally.

Alternatively, the problem can be solved by upgrading Spring Cloud Gateway: in the official 2.1.2 release this code has been reworked, and testing after the upgrade was likewise completely normal.

The above reflects my personal experience; I hope it serves as a useful reference.
