Comments (11)

Dreamsorcerer commented on May 20, 2024

I don't think it's specific to gunicorn. I suspect that, because you are hammering it with requests, you manage to catch an edge case where a request comes in during shutdown.

I'm short on time right now, but if you'd like to try and debug, start by adding print() calls into the aiohttp.web_server.Server.connection_made()/pre_shutdown() methods.

My suspicion is that you will find a call to connection_made() happening after pre_shutdown() has already run.

If that turns out to be correct, then try adding an await asyncio.sleep(0) before the pre_shutdown() call in aiohttp.web_runner.BaseRunner.cleanup() and see if that causes connection_made() to happen before it instead.
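
Something like this (untested, a throwaway monkeypatch applied before the app starts; I'm using *args since the exact signatures may differ between aiohttp versions) should be enough to see the ordering:

    import time

    from aiohttp import web_server

    _orig_connection_made = web_server.Server.connection_made
    _orig_pre_shutdown = web_server.Server.pre_shutdown

    def connection_made(self, *args, **kwargs):
        # Log when each connection is registered with the Server.
        print(f"{time.monotonic():.6f} Server.connection_made()")
        return _orig_connection_made(self, *args, **kwargs)

    def pre_shutdown(self, *args, **kwargs):
        # Log when the shutdown sequence begins.
        print(f"{time.monotonic():.6f} Server.pre_shutdown()")
        return _orig_pre_shutdown(self, *args, **kwargs)

    web_server.Server.connection_made = connection_made
    web_server.Server.pre_shutdown = pre_shutdown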


NotSqrt commented on May 20, 2024

I confirm that connection_made can be called after pre_shutdown of the same process.

With your suggestion, I no longer have exceptions in the server logs (but I still have some dropped requests: potentially an edge case in gunicorn when all processes are booting up simultaneously, despite their backlog option).

         if self._server:  # If setup succeeded
+            await asyncio.sleep(0)
             self._server.pre_shutdown()


Dreamsorcerer commented on May 20, 2024

Alright, if you can figure out a test to add in a PR to reproduce, that'd be great (existing tests for shutdown are in https://github.com/aio-libs/aiohttp/blob/master/tests/test_run_app.py#L924). Otherwise, I'll try and give it a go when I have some time.

For the dropped connections, it could be that the connection has been established, but the handler hasn't started yet, causing the server to close it again without actually handling it. Try changing the 0 to a higher number (maybe 0.1) and see if that makes a difference. If so, you could start debugging further by looking around this code:

while not self._force_close:
    if not self._messages:
        try:
            # wait for next request
            self._waiter = loop.create_future()
            await self._waiter
        except asyncio.CancelledError:
            break
        finally:
            self._waiter = None

or

if self._force_close or self._close:
    return

But, it's not so clear to me how we'd fix this one cleanly.


NotSqrt commented on May 20, 2024

Well, this is clearly out of my league...

I've tried looking into how to cleanly stop an asyncio server that was started with loop.create_server.
Ideally, when the gunicorn worker reaches max_requests, it should stop accepting new connections and do a proper shutdown.

What I've seen is that the RequestHandler.start method increments manager.requests_count on each request, while the GunicornWebWorker checks that value only once per second, in a loop driven by _wait_next_notify and call_later.
So the gunicorn options --max-requests and --max-requests-jitter clearly won't be strictly respected.

It seems quite hard and implementation-specific to stop a server from accepting new connections.
I've seen an ongoing discussion in python/cpython#113538 and at https://stackoverflow.com/questions/43860469/long-running-asyncio-server-reload-certificate
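
From what I understand, the bare-asyncio version of "stop accepting but keep serving" would look roughly like this (a sketch only, not the aiohttp internals; Server.close() closes the listening sockets while established connections stay open):

    import asyncio

    async def main() -> None:
        async def handle(reader: asyncio.StreamReader, writer: asyncio.StreamWriter) -> None:
            data = await reader.read(1024)  # pretend this is the request
            writer.write(data)
            await writer.drain()
            writer.close()
            await writer.wait_closed()

        server = await asyncio.start_server(handle, "127.0.0.1", 8080)

        await asyncio.sleep(60)  # serve for a while

        server.close()              # stop accepting new connections...
        await server.wait_closed()  # ...but note the semantics discussed in
                                    # python/cpython#113538

    asyncio.run(main())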


Dreamsorcerer commented on May 20, 2024

What I've seen is that the RequestHandler.start method increments manager.requests_count on each request, while the GunicornWebWorker checks that value only once per second, in a loop driven by _wait_next_notify and call_later. So the gunicorn options --max-requests and --max-requests-jitter clearly won't be strictly respected.

Yes, I already noticed that. It's only an estimate, but that's likely all it needs to be (to be honest, I'm not sure I even see the point of the option; it seems like a PHP thing, where memory leaks and bad code are rather common).

It seems quite hard and implementation-specific to stop a server from accepting new connections. I've seen an ongoing discussion in python/cpython#113538 and at https://stackoverflow.com/questions/43860469/long-running-asyncio-server-reload-certificate

I think, as Guido says at the beginning of that thread, it's up to us to track the connections, which is basically what we do. We don't want to force-close the server, but gracefully close each connection.

My assumption is that once we stop the server from accepting incoming requests, if we yield to the event loop once, then any connections that were already incoming will have their connection_made() callbacks run. That seems to be true, given that the exceptions you encountered disappeared after an await asyncio.sleep(0).

Then when we try to close the connections, the reason active connections don't close automatically is in the code linked above. Specifically, the expected behaviour is that an idle connection will be waiting at:

await self._waiter

That waiter gets cancelled, which then breaks out of the loop and the connection is closed.

An active connection will be later in that loop and awaiting on the handler task:

resp, reset = await task

In this case, it just gets a _close attribute set, which tells it to break out of the loop at the end (so it doesn't become an idle connection again).

My guess is that the problem occurs when connection_made() gets called, but the code then yields to the event loop before it reaches that loop and starts the handler task (or after entering the loop, but before data is received and it's woken up from the waiter?). At that point the connection is closed; when the code resumes, it sees that the connection should be closed and never starts the handler task. Again at a guess, it may be the data_received() callback that should initiate this, but because the connection is already marked as closed, it just returns and skips processing anything:

if self._force_close or self._close:
    return

This is why I asked if increasing the sleep() call makes a difference, as it would allow some time for data_received() to be called properly, and would suggest that I'm along the right lines.
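
To illustrate the ordering argument outside of aiohttp, a toy sketch (EchoProtocol and its connections list are made up for the example):

    import asyncio

    class EchoProtocol(asyncio.Protocol):
        connections = []  # transports we have seen, so we can close them later

        def connection_made(self, transport):
            print("connection_made")
            EchoProtocol.connections.append(transport)

        def data_received(self, data):
            print("data_received", data)

    async def main():
        loop = asyncio.get_running_loop()
        server = await loop.create_server(EchoProtocol, "127.0.0.1", 8080)

        await asyncio.sleep(5)  # clients connect during this window

        server.close()          # no new sockets will be accepted after this
        await asyncio.sleep(0)  # one yield: pending connection_made() callbacks run
        for transport in EchoProtocol.connections:
            transport.close()   # now close every tracked connection

    asyncio.run(main())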


NotSqrt commented on May 20, 2024

Increasing the sleep to 0.1 or even 0.5 seems to improve the stability a little, but dropped requests can still happen.


Dreamsorcerer commented on May 20, 2024

That could just be the delay in starting every handler, depending on how many requests are queued up in that time.
What I'm thinking is that maybe we need to track whether the connection is new, and if it is, allow it a second or so to receive the request data from the client. If no data is received in that time, then we close the connection.
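
Something along these lines, roughly (just a sketch of the idea with a bare asyncio protocol, not aiohttp's actual RequestHandler; GRACE and GracefulProtocol are made-up names):

    import asyncio
    from typing import Optional

    GRACE = 1.0  # seconds a brand-new connection may stay silent during shutdown

    class GracefulProtocol(asyncio.Protocol):
        def __init__(self) -> None:
            self._got_data = False
            self._transport: Optional[asyncio.BaseTransport] = None

        def connection_made(self, transport: asyncio.BaseTransport) -> None:
            self._transport = transport
            # If no request data arrives within GRACE seconds, drop the connection.
            asyncio.get_running_loop().call_later(GRACE, self._close_if_idle)

        def data_received(self, data: bytes) -> None:
            self._got_data = True
            # ... hand the data over to the request handler as usual ...

        def _close_if_idle(self) -> None:
            if not self._got_data and self._transport is not None:
                self._transport.close()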


Dreamsorcerer commented on May 20, 2024

I don't think I can provide a full fix, it seems to be down to network race conditions with keep-alive connections.

Even if everything else works perfectly, we can still have a situation where the server closes an idle keep-alive connection while there is a request in transit from the client. I don't think there's a way to cancel a close, so there's no way to avoid this situation. This is expected behaviour for HTTP, and for idempotent methods, Gunicorn should probably automatically retry the request (maybe on a different process) without surfacing any errors to the end user:
https://www.rfc-editor.org/rfc/rfc9112.html#name-retrying-requests
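
For illustration, the client-side version of that retry principle is simple enough (this is only a sketch of the RFC's point, not something aiohttp or Gunicorn does for you):

    import aiohttp

    async def get_with_retry(session: aiohttp.ClientSession, url: str, retries: int = 1) -> str:
        # Retry an idempotent GET if the server dropped the connection mid-flight.
        for attempt in range(retries + 1):
            try:
                async with session.get(url) as resp:
                    return await resp.text()
            except (aiohttp.ServerDisconnectedError, aiohttp.ClientConnectorError):
                if attempt == retries:
                    raise
        raise RuntimeError("unreachable")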


Dreamsorcerer commented on May 20, 2024

I also can't reproduce the server-side warnings in a test...
I think we'll just add in the sleep(0) for now.


Dreamsorcerer commented on May 20, 2024

If anyone wants to play with it, this was my attempt at a test, but I get the same result before and after the change (add it to the bottom of test_run_app.py):

    def test_shutdown_while_incoming_connections(
        self, aiohttp_unused_port: Callable[[], int]
    ) -> None:
        port = aiohttp_unused_port()
        results = ()

        async def test() -> None:
            # Give the server a moment to start, then fire one /stop request
            # (which triggers shutdown) alongside 100 normal requests.
            await asyncio.sleep(1)
            async with ClientSession(connector=TCPConnector(limit=10)) as sess:
                async def req(path: str):
                    try:
                        async with sess.get(f"http://localhost:{port}{path}") as resp:
                            return await resp.text()
                    except Exception as exc:
                        # Record the exception type so dropped requests show up
                        # in the results instead of failing the gather().
                        return exc.__class__

                tasks = (req("/stop"), *(req("/") for _ in range(100)))
                nonlocal results
                stop, *results = await asyncio.gather(*tasks)

        async def handler(request: web.Request) -> web.Response:
            return web.Response(text="FOO")

        async def run_test(app: web.Application) -> None:
            # cleanup_ctx: start the client task on startup, cancel it on cleanup.
            nonlocal t
            t = asyncio.create_task(test())
            yield
            t.cancel()
            with contextlib.suppress(asyncio.CancelledError):
                await t

        t = None
        app = web.Application()
        app.cleanup_ctx.append(run_test)
        app.router.add_get("/", handler)
        app.router.add_get("/stop", self.stop)

        web.run_app(app, port=port, shutdown_timeout=10)
        # Every request should either be handled ("FOO") or rejected cleanly
        # with ClientConnectorError; nothing should error out server-side.
        assert len(results) == 100
        assert set(results) == {"FOO", ClientConnectorError}


Dreamsorcerer commented on May 20, 2024

I think, also, if integration with Gunicorn can allow it to be alerted when the server is initiating a shutdown, then Gunicorn could also just stop forwarding requests to that process immediately and avoid any dropped connections that way. I'm not familiar with what's available in Gunicorn, so this is something worth investigating if you're keen to improve on this.
