Comments (17)
It looks like with the 9ms timeout you hit a race where the cancellation reaches your doOnCancel but at the same time the connection is being delivered. That's my expectation, as the logs show the connection was established. The connect timeout would apply if the connection failed to be established, but here it succeeds and is delivered to your delayElement() operator. That operator keeps a reference to the connection. When the delay is longer than the argument to timeout(), the scheduled task that holds the reference to the connection is disposed and the reference is discarded. If no discard hook is defined on the reactive pipeline, the reference is simply abandoned and the connection is never returned to the pool. That's my understanding of the situation. reactor-netty's timeout configuration has no insight into the pipeline that you attach to the results and can't control your code that refers to the connection.
Just trying to help - please check if adding the discard hook that releases the connection helps.
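A minimal sketch of that suggestion, using reactor-core only (a String stands in for the pooled connection, and the class and method names here are made up for illustration): the discard hook observes the element that delayElement() abandons when timeout() cancels the pipeline. With reactor-netty the hook would be something like .doOnDiscard(Connection.class, Connection::dispose).

```java
import java.time.Duration;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import reactor.core.publisher.Mono;

public class DiscardHookSketch {

    // Returns true if the discard hook saw the abandoned element.
    static boolean discardHookFires() throws InterruptedException {
        CountDownLatch released = new CountDownLatch(1);
        Mono.just("pooled-connection")               // stand-in for client.connect()
            .delayElement(Duration.ofMillis(300))    // holds a reference to the element
            .timeout(Duration.ofMillis(30))          // cancels upstream before delivery
            .doOnDiscard(String.class, c -> released.countDown()) // here we'd call connection.dispose()
            .onErrorResume(e -> Mono.empty())        // swallow the TimeoutException
            .block();
        // the discard happens on the timer thread, so wait briefly for it
        return released.await(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("discard hook fired: " + discardHookFires());
    }
}
```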
from reactor-netty.
@jkippen With this reproducible example, I'm seeing that the connection is always emitted, but your pipeline delays it intentionally. If I add a log() like below, I can see that the connection was delivered.
Mono
    .defer(() -> client
        .connect()
        .log()
        .doOnCancel(() -> log.debug("********* cancelling"))
        ...
As the connection is delivered, you have to guarantee that the connection is disposed regardless of whether the outcome is cancellation, error, or normal completion.
The logs:
10:01:02.562 [reactor-tcp-nio-3] DEBUG r.n.r.DefaultPooledConnectionProvider - [054ae908, L:/127.0.0.1:50560 - R:localhost/127.0.0.1:5555] onStateChange(PooledConnection{channel=[id: 0x054ae908, L:/127.0.0.1:50560 - R:localhost/127.0.0.1:5555]}, [connected])
10:01:02.562 [reactor-tcp-nio-3] INFO reactor.netty.tcp.TcpServerTests - Client Connection State: [id: 0x054ae908, L:/127.0.0.1:50560 - R:localhost/127.0.0.1:5555] // [connected] // reactor-tcp-nio-3
10:01:02.569 [reactor-tcp-nio-3] DEBUG r.n.r.DefaultPooledConnectionProvider - [054ae908, L:/127.0.0.1:50560 - R:localhost/127.0.0.1:5555] onStateChange(ChannelOperations{PooledConnection{channel=[id: 0x054ae908, L:/127.0.0.1:50560 - R:localhost/127.0.0.1:5555]}}, [configured])
10:01:02.570 [reactor-tcp-nio-4] DEBUG reactor.netty.tcp.TcpServer - [3d8e3123, L:/127.0.0.1:5555 - R:/127.0.0.1:50560] Handler is being applied: reactor.netty.tcp.TcpServerTests$$Lambda$407/1621513804@55d2037f
10:01:02.570 [reactor-tcp-nio-3] INFO reactor.Mono.Create.2 - onNext(ChannelOperations{PooledConnection{channel=[id: 0x054ae908, L:/127.0.0.1:50560 - R:localhost/127.0.0.1:5555]}})
10:01:02.570 [reactor-tcp-nio-4] INFO reactor.netty.tcp.TcpServerTests - ******** Server receive ChannelOperations{SimpleConnection{channel=[id: 0x3d8e3123, L:/127.0.0.1:5555 - R:/127.0.0.1:50560]}}
10:01:02.570 [reactor-tcp-nio-3] INFO reactor.Mono.Create.2 - onComplete()
10:01:02.570 [reactor-tcp-nio-3] INFO reactor.netty.tcp.TcpServerTests - Client Connection State: [id: 0x054ae908, L:/127.0.0.1:50560 - R:localhost/127.0.0.1:5555] // [configured] // reactor-tcp-nio-3
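One way to provide that guarantee is Mono.usingWhen, which ties resource release to all three terminations (complete, error, and cancel). A sketch under the assumption that a hypothetical FakeConnection stands in for reactor.netty.Connection and Mono.just("response") stands in for sendAndReceive():

```java
import java.util.concurrent.atomic.AtomicBoolean;
import reactor.core.publisher.Mono;

public class UsingWhenSketch {

    // Hypothetical stand-in for reactor.netty.Connection.
    static final class FakeConnection {
        final AtomicBoolean disposed = new AtomicBoolean(false);
        void dispose() { disposed.set(true); }
    }

    // Runs a (fake) exchange and releases the connection on complete, error and cancel.
    static FakeConnection exchange(boolean fail) {
        FakeConnection conn = new FakeConnection();
        Mono.usingWhen(
                Mono.just(conn),                                  // stand-in for client.connect()
                c -> fail ? Mono.<String>error(new RuntimeException("boom"))
                          : Mono.just("response"),                // stand-in for sendAndReceive(c)
                c -> Mono.fromRunnable(c::dispose),               // on complete
                (c, err) -> Mono.fromRunnable(c::dispose),        // on error
                c -> Mono.fromRunnable(c::dispose))               // on cancel
            .onErrorResume(e -> Mono.empty())
            .block();
        return conn;
    }

    public static void main(String[] args) {
        System.out.println("disposed after success: " + exchange(false).disposed.get());
        System.out.println("disposed after error: " + exchange(true).disposed.get());
    }
}
```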
I may not have explained the issue correctly - I intentionally add a delay to simulate the timeout firing before the connection is delivered, to see whether there is a race condition between connect() delivering a connection, the timeout firing, and the connection not being aborted.
Even when the doOnCancel fires, I still see the connection hanging around in the configured state, taking up a slot in the connection pool.
I added the doOnCancel just for some observability into the abort of the connection.
@jkippen With the current example I cannot reproduce what you are explaining. It seems that you can, so please provide logs from an execution where you see the behaviour you are describing.
I added a couple of screenshots from my client and server logs - are they visible?
Here's a smaller snippet of code, if it helps. In the client log I attached above, the first exception is the timeout firing, and the second is the acquire timeout from the second connect() attempt.
ConnectionProvider provider = ConnectionProvider.create("test", 1, true);

// This connect times out and leaves a connection in the configured state
Mono
    .defer(() -> client
        .connect()
        .delayElement(Duration.ofMillis(1000))
        .timeout(Duration.ofMillis(900))
        .flatMap(connection -> sendAndReceive(connection))
    ).block();

// This connect cannot acquire a connection
Mono
    .defer(() -> client
        .connect()
        .flatMap(connection -> sendAndReceive(connection))
        .doOnError(e -> e.printStackTrace())
        .onErrorComplete()
    )
    .block();
oh I see what you're saying - I removed some stuff to share cleaned-up code... will share again in a moment
I have a feeling this issue is very similar to this one from reactor-pool. Please have a look at my comment; perhaps using .doOnDiscard() you will be able to fix it.
I want to see logs from connect(), whether we delivered the connection or not. So @jkippen, please add .log() after the connect(). I need to know whether the issue is a cancellation while the connection is still not delivered, or after that.
Okay, here's the code for the first connect() attempt that times out (I had removed onErrorComplete) - will add log() and send the output shortly.
// timeout via reactive
Mono
    .defer(() -> client
        .connect()
        .doOnCancel(() -> log.debug("********* cancelling"))
        .delayElement(Duration.ofMillis(Random.from(RandomGenerator.getDefault()).nextInt(900, 1000)))
        .timeout(Duration.ofMillis(9))
        .flatMap(connection -> sendAndReceive(connection))
        .doOnError(e -> e.printStackTrace())
        .onErrorComplete()
    ).block();
Here's the log with the log() output:
@chemicL to be honest, I'm assuming that using the connect timeout on the TcpClient itself would avoid the problem (though I'm not sure, and not sure how to validate that). Even so (and I don't know much about Reactor), if the responsibility of timeout(..) is to cancel the Mono, shouldn't it abort the connection regardless of whether it was emitted from connect() or not?
In my example, if you change the timeout to something closer to the delay (i.e. 900ms), the doOnCancel also doesn't run. In the output of my last test, doOnCancel did run (the timeout was 9ms).
Here's an output of the log when doOnCancel doesn't run, if it's helpful.
@jkippen One additional question - are you testing with this ancient version -> implementation platform('io.projectreactor:reactor-bom:2020.0.9') or with the latest releases?
Ah - I was using the ancient version. However, I updated my Gradle dependency to io.projectreactor:reactor-bom:2023.0.4, cleaned my workspace, broke my workspace, fixed it, and retested. Got the same result.
@jkippen I agree with the comment made by @chemicL. You need to handle cancellation and error events. From the Reactor Netty point of view, it would be dangerous to handle these events and just close the connection: we don't know what kind of protocol is implemented and whether that protocol needs to do some cleanup in these situations.
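For the pipeline in this issue, that means the releasing has to live in user code. A sketch (reactor-core only; FakeConnection and the stand-in operators are illustrative, not reactor-netty API) combining both safety nets: doOnDiscard() for the element abandoned by timeout()'s cancellation, and doFinally() once sendAndReceive() actually owns the connection:

```java
import java.time.Duration;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import reactor.core.publisher.Mono;

public class SafePipelineSketch {

    // Hypothetical stand-in for reactor.netty.Connection.
    static final class FakeConnection {
        final CountDownLatch disposed = new CountDownLatch(1);
        void dispose() { disposed.countDown(); }
    }

    // Returns true if the connection was released, whichever side wins the race.
    static boolean connectionReleased(Duration delay, Duration timeout) throws InterruptedException {
        FakeConnection conn = new FakeConnection();
        Mono.just(conn)                                               // stand-in for client.connect()
            .delayElement(delay)
            .timeout(timeout)
            .doOnDiscard(FakeConnection.class, FakeConnection::dispose) // abandoned by cancellation
            .flatMap(c -> Mono.just("response")                       // stand-in for sendAndReceive(c)
                              .doFinally(sig -> c.dispose()))         // released after use
            .onErrorResume(e -> Mono.empty())
            .block();
        return conn.disposed.await(1, TimeUnit.SECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // timeout fires first: the discard hook releases the abandoned connection
        System.out.println(connectionReleased(Duration.ofMillis(300), Duration.ofMillis(30)));
        // delivery wins: doFinally releases the connection after use
        System.out.println(connectionReleased(Duration.ofMillis(30), Duration.ofMillis(300)));
    }
}
```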