Comments (26)
Glad to hear. I didn't expect that the default value's increase would have such a negative impact, I will need to do further experiments and perhaps change either the value or the related algorithm to better handle both cases (small bodies and large bodies). Might be worth looking at what browsers are doing too.
from gun.
Another example
So you can see that 2.0.1 with no extra options noticeably degrades response time.
from gun.
Best I redirect you to the spec, see https://datatracker.ietf.org/doc/html/rfc7540#section-6.9.1 for full details about flow control. The two options are what Gun will set for the initial values for the flow control window. Then Gun has an algorithm that ensures there's always some space in the window, see cow_http2_machine:ensure_window for the implementation.
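For reference, a minimal sketch of how those two initial window values can be passed when opening a gun 2.x connection. The host and the numbers are placeholders for illustration, not recommendations:

```erlang
%% Sketch (gun 2.x): open an HTTP/2 connection with explicit initial
%% flow-control windows. Host and values are illustrative only.
{ok, ConnPid} = gun:open("example.org", 443, #{
    protocols => [http2],
    http2_opts => #{
        %% Initial size of the connection-level flow-control window.
        initial_connection_window_size => 8000000,
        %% Initial size of each new stream's flow-control window.
        initial_stream_window_size => 8000000
    }
}),
{ok, http2} = gun:await_up(ConnPid).
```

From there, gun's ensure_window logic keeps topping the windows up as data is consumed.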
from gun.
Hello, I'm not sure what you mean by "downgrade" here or what you're measuring exactly. I don't think there are any changes that require configuration, although if you're connecting over HTTP/2 some settings are best tweaked.
from gun.
Response time from the services has increased. I only changed 1.3.0 to 2.0.1, without any settings, and I got worse percentiles. No changes other than the gun version.
I can show graphs.
from gun.
I need a way to reproduce but having data would help understand what this is about yes.
from gun.
There are 2 services on AWS using HTTP/2 with gun. Response time from the 2nd service grew after updating the 1st service to gun 2.0.1.
The update to 2.0.1 happened at 20:00.
from gun.
Percentiles in %, before vs. after (graphs attached). No changes other than updating the client to the new gun.
from gun.
4194682 was meant to improve HTTP/2 performance when receiving larger bodies. But perhaps this had a negative impact in your case.
What is the size of the body you send, and what is the size of the body you receive (roughly)?
Another change is that send_timeout is now enabled by default.
There may be a few other things. If you can try different Gun commits it could help identify when things started getting worse. I could provide a few interesting commits to upgrade to and see what happens.
from gun.
received_bytes sent_bytes
4004 4687
586 735
588 737
149 296
992 1213
200 299
592 740
6036 7050
568 716
1110 1324
265 366
330 454
1759 2071
149 296
825 1003
149 294
581 722
187 285
270 368
184 283
211 301
287 384
828 1003
211 301
998 1213
4123 4746
630 750
669 817
283 384
149 295
149 296
149 295
1072 1286
479 587
263 361
4815 5518
149 295
149 295
265 366
149 295
1048 1236
149 295
2676 2969
2135 2067
1108 1319
517 661
149 294
671 838
149 297
149 297
265 366
149 295
479 587
1581 1789
225 324
from gun.
Yeah, small. So it's possible the higher default is causing trouble for the server. Try setting the http2_opts options initial_connection_window_size and initial_stream_window_size to their old default values (65535). Or if you don't want to upgrade to test, try changing the default in 1.3.0 to 8000000 and see if that makes things worse.
from gun.
And another sample:
received_bytes sent_bytes
558 2193
867 2983
343 1959
872 2997
868 3152
379 548
868 3188
869 3244
195 718
417 2354
493 683
676 2415
208 270
498 2838
871 3292
522 2471
634 2351
204 503
645 1326
343 1798
557 2113
923 3681
340 1797
198 709
177 272
197 722
618 2760
535 2502
494 684
642 2375
372 914
496 686
372 546
375 893
197 783
198 703
509 2728
198 718
721 3105
721 3033
557 2135
536 2729
535 2744
from gun.
But I don't see initial_stream_window_size or initial_connection_window_size in the gun options:
-type opts() :: #{
connect_timeout => timeout(),
http_opts => http_opts(),
http2_opts => http2_opts(),
protocols => [http | http2],
retry => non_neg_integer(),
retry_timeout => pos_integer(),
trace => boolean(),
transport => tcp | tls | ssl,
transport_opts => [gen_tcp:connect_option()] | [ssl:connect_option()],
ws_opts => ws_opts()
}.
and http2_opts:
-type http2_opts() :: #{
keepalive => timeout()
}.
from gun.
Right it's not available in 1.3, sorry. I guess the only way to properly test is upgrading to 2.0 and set the option there to 65535.
#{ http2_opts => #{ initial_connection_window_size => 65535, initial_stream_window_size => 65535 }}
from gun.
Thanks! I'll try it later!
Do I have to add additional options to
#{
http2_opts => #{
keepalive => 60 * 1000,
initial_connection_window_size => 65535, initial_stream_window_size => 65535
},
connect_timeout => 5000,
retry => 100,
retry_timeout => 1000 % 1s
},
besides yours?
from gun.
Go with that for now and let's see.
from gun.
initial_connection_window_size => 65535, initial_stream_window_size => 65535
It really works! Much better with options!
from gun.
Maybe you can advise how to correctly choose these values? What's the rule?
from gun.
I have a service which works correct with default gun, I'll give you sizes of bodies on Monday. It's really interesting, no any downgrade.
from gun.
Maybe you can advise how to correctly choose these values? What's the rule?
It's just a control over how much memory you are willing to use. The downside is that the lower the value (and the lower the memory usage), the lower the performance. Gun's defaults set those values to 8MB to favor performance with large bodies. But in Gun, setting this value to 8MB has no effect on its own; no buffer gets allocated immediately. It just means there can be roughly 8MB in transit at once between your application and the server.
The server you are connected to clearly doesn't like that, though. Maybe on your service that runs well you are connected to a different server. Or perhaps it's an issue related to shared resources or bandwidth. Figuring out the real cause is likely to be difficult.
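As a rough illustration of the trade-off described above: the window caps how many bytes can be in flight, so a stream's throughput is bounded by window size divided by round-trip time. The 50 ms RTT below is an assumed figure, not a measurement:

```erlang
%% Back-of-the-envelope bound, evaluable in an erl shell.
%% Window / RTT gives the ceiling a flow-control-limited stream can reach.
Window = 65535,                 %% bytes (HTTP/2 spec default window)
Rtt = 0.05,                     %% seconds (assumed 50 ms round trip)
MaxBytesPerSec = Window / Rtt.  %% throughput ceiling in bytes/second
```

With these assumed numbers the ceiling is around 1.3 MB/s, which is why larger windows help large bodies.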
from gun.
So:
initial_connection_window_size - is it the maximum amount of data in flight across all streams at once?
initial_stream_window_size - is it the maximum for a single stream?
from gun.
Morning! Now I get a new error:
{badmap,{'EXIT',{{badmatch,{error,{stream_error,{closed,{error,closed}}}}}
And stacktrace shows a line with
{ok, Body} = gun:await_body(ConnPid, StreamRef, ?TIMEOUT),
Maybe additional options are needed?
from gun.
That just indicates the server closed a connection. Please open a separate ticket with the stacktrace.
from gun.
It's already open: #291. That's actually why I switched back to gun 1.3.0.
from gun.
Have you tried using tcp_opts => [{nodelay, true}] in gun:opts()?
The way the HTTP/2 handler is implemented in gun means that gun will use two separate gen_tcp:send calls for every HTTP/2 request: one send for the HEADERS frame and one for the DATA frame. This leads to two TCP segments being sent, and that pattern triggers a bad interaction between the TCP Nagle algorithm and TCP delayed ACK.
The observable effect is typically a 40ms delay in the request.
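A sketch of what that suggestion would look like with gun 2.x option names; the host is a placeholder:

```erlang
%% Sketch: pass TCP_NODELAY through gun's tcp_opts so the separate
%% HEADERS and DATA sends are not held back by Nagle's algorithm.
{ok, ConnPid} = gun:open("example.org", 443, #{
    protocols => [http2],
    tcp_opts => [{nodelay, true}]
}).
```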
from gun.
No, I haven't. I used initial_connection_window_size/initial_stream_window_size and it helped reduce response time.
from gun.