nest_asyncio's Issues

Attempt to access websocket outside of a relevant context

RuntimeError: Attempt to access websocket outside of a relevant context
Attempt to access websocket outside of a relevant context
Traceback (most recent call last):
  File "D:\SmartBot\my_env\SmartBot\telegram.py", line 888, in wa2tg2
    data = await websocket.receive()
  File "C:\Users\Smart\AppData\Local\Programs\Python\Python37-32\lib\site-packages\quart\local.py", line 115, in __getattr__
    return getattr(self._get_current_object(), name)
  File "C:\Users\Smart\AppData\Local\Programs\Python\Python37-32\lib\site-packages\quart\local.py", line 84, in _get_current_object
    return object.__getattribute__(self, '__LocalProxy_local')()
  File "C:\Users\Smart\AppData\Local\Programs\Python\Python37-32\lib\site-packages\quart\globals.py", line 14, in _ctx_lookup
    raise RuntimeError(f"Attempt to access {name} outside of a relevant context")

This occurs when using quart together with telethon.

Error when running multiple Jupyter cells: ERROR! Session/line number was not unique in database

I am running multiple cells, each of which runs an async function, and I want the cells to run consecutively, so I am using loop.run_until_complete to make sure each cell completes before the next one runs. Here is an example:

import asyncio
import time  # needed for the time.sleep call below
import nest_asyncio
from timeit import default_timer

nest_asyncio.apply()

async def print_loop():
  print('{0:<30}{1:>20}'.format('Function', 'Completed at'))
  for i in range(3):
    elapsed = default_timer() - START_TIME
    completed_at = '{:5.2f}s'.format(elapsed)
    print('{0:<30}{1:>20}'.format('[LOOP] %d' % i, completed_at))
    time.sleep(0.5)

START_TIME = default_timer()

# This code will go on each cell
loop = asyncio.get_event_loop()
task = asyncio.ensure_future(print_loop())
loop.run_until_complete(task)

The run_until_complete call blocks the next cell as expected, but when running 3 or more cells I get the error
ERROR! Session/line number was not unique in database. History logging moved to new session
and the output starts appearing in different cells.

(screenshot: bugger_init)
And I get the following:
(screenshot: bugged_cells)

AttributeError: 'Loop' object has no attribute '_check_closed'

Hi Ewald.

I fell in love with ib_insync several months ago. Now, looking for a solution to nested loops, I found this amazing package, and it's yours too. Great work, Ewald!

I'm trying to use it with aiogram and my own async code, but I get this error. I'm using Python 3.7.2.

Could you help me? Thanks for your brilliant work.

...
  File "/home/argante/anaconda2/Proyectos 8/beagledwx/dwx.py", line 51, in data_request
    data = list_of_dict_to_dict(loop.run_until_complete(future))
  File "/home/argante/anaconda2/envs/python372/lib/python3.7/site-packages/nest_asyncio.py", line 53, in run_until_complete
    self._check_closed()
AttributeError: 'Loop' object has no attribute '_check_closed'

Task not executing

Say you have a file foo.py:

import asyncio

async def foo():
    print('foo')
    await asyncio.sleep(0)
    print('done')

loop = asyncio.get_event_loop()
task = asyncio.ensure_future(foo())
loop.run_until_complete(task)

And a Notebook with this cell:

import nest_asyncio
nest_asyncio.apply()

import foo

Executing the cell just hangs; foo() is never executed.

Document tradeoffs

Please add some explanation of what tradeoffs one takes on when using this library.

For example, from very briefly looking at the code, it seems that using this forces the pure-Python asyncio implementation to be used.
Is there a performance (or other) impact?

How can things break with nested loops?

Thanks.

Can't patch loop of type <class 'uvloop.Loop'>

Hi, thanks for this great library.
I got an error when using nest_asyncio on Python 3.8: "Can't patch loop of type <class 'uvloop.Loop'>". Uninstalling uvloop solves the problem, but uvloop seems to be faster, so is there any plan to support it?
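A possible workaround short of uninstalling uvloop, as a minimal sketch (it forces the default loop policy wherever nest_asyncio is needed, giving up uvloop's speed there):

import asyncio

import nest_asyncio

# Use the default (patchable) event loop instead of uvloop's.
asyncio.set_event_loop_policy(asyncio.DefaultEventLoopPolicy())
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
nest_asyncio.apply(loop)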

Ready queue mutation, IndexError: pop from an empty deque

Hey,

First, thanks for sharing the patch and publishing this library. I'm trying to work around re-entrancy while converting a codebase incrementally to asyncio, where there are call chains of async > loop.run_until_complete(async).

I've hit an issue with the patch where handles seem to be popped from the _ready queue from elsewhere, i.e. in python3.6/asyncio/base_events.py, popping from the ready queue throws because the loop iterates too far:

        ntodo = len(self._ready)
        for i in range(ntodo):
            handle = self._ready.popleft() # throws.
            if handle._cancelled:
                continue

The full exception stack trace looks like this:

return aio_batcher.run_until_complete(aio_batcher.evaluate_coroutines(json_data))
  File "/Users/cory/src/nd/util/aio_batcher/decorators.py", line 50, in run_until_complete
    return loop.run_until_complete(awaitable)
  File "/var/lib/conda/envs/nextdoor3/lib/python3.6/site-packages/nest_asyncio.py", line 63, in run_until_complete
    return self._run_until_complete_orig(future)
  File "/var/lib/conda/envs/nextdoor3/lib/python3.6/asyncio/base_events.py", line 454, in run_until_complete
    self.run_forever()
  File "/var/lib/conda/envs/nextdoor3/lib/python3.6/asyncio/base_events.py", line 421, in run_forever
    self._run_once()
  File "/var/lib/conda/envs/nextdoor3/lib/python3.6/asyncio/base_events.py", line 1411, in _run_once
    handle = self._ready.popleft()
IndexError: pop from an empty deque

My hypothesis is that one of the enqueued items triggers the nested run, or else an enqueued item is processing other elements from the queue.

It's the worst kind of problem: reproducible, but I have yet to isolate a simple test case that makes it easy to understand. Do you have a hunch about what might be happening? Perhaps something you ran into while writing the patch?

Thanks for your help,
Cory

timeout

Hi there.

I was wondering where the choice of 10 came from for the timeout variable in nest_asyncio.py.

        timeout = 0 if ready or self._stopping \
            else min(max(0, scheduled[0]._when - now), 10) if scheduled \
            else None

If I read the expression right, it computes the select timeout: 0 when there is ready work or the loop is stopping, otherwise the time until the next scheduled callback capped at 10 seconds, otherwise None. To get my code to work I had to increase the cap slightly.

Thanks,
Simon.

AttributeError: module 'sys' has no attribute 'get_asyncgen_hooks'

I'm running an application built on Python 3.5.3 that uses nest_asyncio, and when running the main Python application I get the following error stack:

File "C:\Users\tez\AppData\Local\Programs\Python\Python35\lib\site-packages\nest_asyncio.py", line 51, in run_forever
with manage_run(self), manage_asyncgens(self):
File "C:\Users\tez\AppData\Local\Programs\Python\Python35\lib\contextlib.py", line 59, in enter
return next(self.gen)
File "C:\Users\tez\AppData\Local\Programs\Python\Python35\lib\site-packages\nest_asyncio.py", line 132, in manage_asyncgens
old_agen_hooks = sys.get_asyncgen_hooks()
AttributeError: module 'sys' has no attribute 'get_asyncgen_hooks'

Any idea if it's a versioning issue or what might be wrong?
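For what it's worth, sys.get_asyncgen_hooks() was added in Python 3.6 (PEP 525), so on 3.5.3 the attribute simply does not exist. A quick check:

import sys

# Prints True on Python >= 3.6 and False on 3.5.x, which would
# explain the AttributeError above.
print(sys.version_info)
print(hasattr(sys, 'get_asyncgen_hooks'))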

nest_asyncio.apply() cause pyppeteer hangs and cpu 100%

import nest_asyncio
nest_asyncio.apply()  # adding this line makes the program hang

import asyncio
import pyppeteer

w8 = "ws://127.0.0.1:9222/devtools/browser/9a23f54e-8338-4166-9f3c-bbc6629c17a4"
loop = asyncio.get_event_loop()
connection = pyppeteer.connection.Connection(w8, loop, 0)

async def af():
    r = await connection.send('Target.getBrowserContexts')
    print(r)

loop.run_until_complete(af())

When I press Ctrl+C, the traceback is:

>>python C:/test/atest.py
Traceback (most recent call last):
  File "C:/test/atest.py", line 34, in <module>
    loop.run_until_complete(af())
  File "C:\QGB\Anaconda3\lib\site-packages\nest_asyncio.py", line 89, in run_until_complete
    self._run_once()
  File "C:\QGB\Anaconda3\lib\site-packages\nest_asyncio.py", line 123, in _run_once
    handle._run()
  File "C:\QGB\Anaconda3\lib\site-packages\nest_asyncio.py", line 190, in run
    ctx.run(self._callback, *self._args)
  File "C:\QGB\Anaconda3\lib\site-packages\nest_asyncio.py", line 148, in step
    step_orig(task, exc)
  File "C:\QGB\Anaconda3\lib\asyncio\tasks.py", line 249, in __step
    result = coro.send(None)
  File "C:\QGB\Anaconda3\lib\site-packages\pyppeteer\connection.py", line 70, in _async_send
    while not self._connected:
KeyboardInterrupt
2020-11-29:10:51:53,257 ERROR    [base_events.py:1604] Task exception was never retrieved

1.5.1: test is failing in project unit

Just normal build, install and test cycle used on building package from non-root account:

  • "setup.py build"
  • "setup.py install --root </install/prefix>"
  • "pytest with PYTHONPATH pointing to setearch and sitelib inside </install/prefix>
+ PYTHONPATH=/home/tkloczko/rpmbuild/BUILDROOT/python-nest_asyncio-1.5.1-2.fc35.x86_64/usr/lib64/python3.8/site-packages:/home/tkloczko/rpmbuild/BUILDROOT/python-nest_asyncio-1.5.1-2.fc35.x86_64/usr/lib/python3.8/site-packages
+ PYTHONDONTWRITEBYTECODE=1
+ /usr/bin/pytest -ra
=========================================================================== test session starts ============================================================================
platform linux -- Python 3.8.9, pytest-6.2.4, py-1.10.0, pluggy-0.13.1
benchmark: 3.4.1 (defaults: timer=time.perf_counter disable_gc=False min_rounds=5 min_time=0.000005 max_time=1.0 calibration_precision=10 warmup=False warmup_iterations=100000)
rootdir: /home/tkloczko/rpmbuild/BUILD/nest_asyncio-1.5.1
plugins: forked-1.3.0, shutil-1.7.0, virtualenv-1.7.0, expect-1.1.0, httpbin-1.0.0, flake8-1.0.7, timeout-1.4.2, betamax-0.8.1, freezegun-0.4.2, case-1.5.3, isort-1.3.0, aspectlib-1.5.2, asyncio-0.15.1, toolbox-0.5, xprocess-0.17.1, aiohttp-0.3.0, checkdocs-2.7.0, mock-3.6.1, rerunfailures-9.1.1, requests-mock-1.9.3, cov-2.12.1, pyfakefs-4.5.0, cases-3.6.1, flaky-3.7.0, hypothesis-6.14.0, benchmark-3.4.1, xdist-2.3.0, Faker-8.8.1
collected 9 items

. F                                                                                                                                                                  [ 14%]
tests/nest_test.py ......                                                                                                                                            [100%]

================================================================================= FAILURES =================================================================================
_______________________________________________________________________________ test session _______________________________________________________________________________

cls = <class '_pytest.runner.CallInfo'>, func = <function call_runtest_hook.<locals>.<lambda> at 0x7f589f0019d0>, when = 'call'
reraise = (<class '_pytest.outcomes.Exit'>, <class 'KeyboardInterrupt'>)

    @classmethod
    def from_call(
        cls,
        func: "Callable[[], TResult]",
        when: "Literal['collect', 'setup', 'call', 'teardown']",
        reraise: Optional[
            Union[Type[BaseException], Tuple[Type[BaseException], ...]]
        ] = None,
    ) -> "CallInfo[TResult]":
        excinfo = None
        start = timing.time()
        precise_start = timing.perf_counter()
        try:
>           result: Optional[TResult] = func()

/usr/lib/python3.8/site-packages/_pytest/runner.py:311:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

>       lambda: ihook(item=item, **kwds), when=when, reraise=reraise
    )

/usr/lib/python3.8/site-packages/_pytest/runner.py:255:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <_HookCaller 'pytest_runtest_call'>, args = (), kwargs = {'item': <CheckdocsItem project>}, notincall = set()

    def __call__(self, *args, **kwargs):
        if args:
            raise TypeError("hook calling supports only keyword arguments")
        assert not self.is_historic()
        if self.spec and self.spec.argnames:
            notincall = (
                set(self.spec.argnames) - set(["__multicall__"]) - set(kwargs.keys())
            )
            if notincall:
                warnings.warn(
                    "Argument(s) {} which are declared in the hookspec "
                    "can not be found in this hook call".format(tuple(notincall)),
                    stacklevel=2,
                )
>       return self._hookexec(self, self.get_hookimpls(), kwargs)

/usr/lib/python3.8/site-packages/pluggy/hooks.py:286:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <_pytest.config.PytestPluginManager object at 0x7f58a5e1e250>, hook = <_HookCaller 'pytest_runtest_call'>
methods = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/usr/lib/python3.8/site-packages/_pytest/runner...=None>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7f589f402220>>, ...]
kwargs = {'item': <CheckdocsItem project>}

    def _hookexec(self, hook, methods, kwargs):
        # called from all hookcaller instances.
        # enable_tracing will set its own wrapping function at self._inner_hookexec
>       return self._inner_hookexec(hook, methods, kwargs)

/usr/lib/python3.8/site-packages/pluggy/manager.py:93:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

hook = <_HookCaller 'pytest_runtest_call'>
methods = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/usr/lib/python3.8/site-packages/_pytest/runner...=None>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7f589f402220>>, ...]
kwargs = {'item': <CheckdocsItem project>}

>   self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
        methods,
        kwargs,
        firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
    )

/usr/lib/python3.8/site-packages/pluggy/manager.py:84:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

hook_impls = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/usr/lib/python3.8/site-packages/_pytest/runner...=None>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7f589f402220>>, ...]
caller_kwargs = {'item': <CheckdocsItem project>}, firstresult = False

    def _multicall(hook_impls, caller_kwargs, firstresult=False):
        """Execute a call into multiple python functions/methods and return the
        result(s).

        ``caller_kwargs`` comes from _HookCaller.__call__().
        """
        __tracebackhide__ = True
        results = []
        excinfo = None
        try:  # run impl and wrapper setup functions in a loop
            teardowns = []
            try:
                for hook_impl in reversed(hook_impls):
                    try:
                        args = [caller_kwargs[argname] for argname in hook_impl.argnames]
                    except KeyError:
                        for argname in hook_impl.argnames:
                            if argname not in caller_kwargs:
                                raise HookCallError(
                                    "hook call must provide argument %r" % (argname,)
                                )

                    if hook_impl.hookwrapper:
                        try:
                            gen = hook_impl.function(*args)
                            next(gen)  # first yield
                            teardowns.append(gen)
                        except StopIteration:
                            _raise_wrapfail(gen, "did not yield")
                    else:
                        res = hook_impl.function(*args)
                        if res is not None:
                            results.append(res)
                            if firstresult:  # halt further impl calls
                                break
            except BaseException:
                excinfo = sys.exc_info()
        finally:
            if firstresult:  # first result hooks return a single value
                outcome = _Result(results[0] if results else None, excinfo)
            else:
                outcome = _Result(results, excinfo)

            # run all wrapper post-yield blocks
            for gen in reversed(teardowns):
                try:
                    gen.send(outcome)
                    _raise_wrapfail(gen, "has second yield")
                except StopIteration:
                    pass

>           return outcome.get_result()

/usr/lib/python3.8/site-packages/pluggy/callers.py:208:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pluggy.callers._Result object at 0x7f589efe3a00>

    def get_result(self):
        """Get the result(s) for this hook call.

        If the hook was marked as a ``firstresult`` only a single value
        will be returned otherwise a list of results.
        """
        __tracebackhide__ = True
        if self._excinfo is None:
            return self._result
        else:
            ex = self._excinfo
            if _py3:
>               raise ex[1].with_traceback(ex[2])

/usr/lib/python3.8/site-packages/pluggy/callers.py:80:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

hook_impls = [<HookImpl plugin_name='runner', plugin=<module '_pytest.runner' from '/usr/lib/python3.8/site-packages/_pytest/runner...=None>>, <HookImpl plugin_name='logging-plugin', plugin=<_pytest.logging.LoggingPlugin object at 0x7f589f402220>>, ...]
caller_kwargs = {'item': <CheckdocsItem project>}, firstresult = False

    def _multicall(hook_impls, caller_kwargs, firstresult=False):
        """Execute a call into multiple python functions/methods and return the
        result(s).

        ``caller_kwargs`` comes from _HookCaller.__call__().
        """
        __tracebackhide__ = True
        results = []
        excinfo = None
        try:  # run impl and wrapper setup functions in a loop
            teardowns = []
            try:
                for hook_impl in reversed(hook_impls):
                    try:
                        args = [caller_kwargs[argname] for argname in hook_impl.argnames]
                    except KeyError:
                        for argname in hook_impl.argnames:
                            if argname not in caller_kwargs:
                                raise HookCallError(
                                    "hook call must provide argument %r" % (argname,)
                                )

                    if hook_impl.hookwrapper:
                        try:
                            gen = hook_impl.function(*args)
                            next(gen)  # first yield
                            teardowns.append(gen)
                        except StopIteration:
                            _raise_wrapfail(gen, "did not yield")
                    else:
>                       res = hook_impl.function(*args)

/usr/lib/python3.8/site-packages/pluggy/callers.py:187:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

item = <CheckdocsItem project>

    def pytest_runtest_call(item: Item) -> None:
        _update_current_test_var(item, "call")
        try:
            del sys.last_type
            del sys.last_value
            del sys.last_traceback
        except AttributeError:
            pass
        try:
            item.runtest()
        except Exception as e:
            # Store trace info to allow postmortem debugging
            sys.last_type = type(e)
            sys.last_value = e
            assert e.__traceback__ is not None
            # Skip *this* frame
            sys.last_traceback = e.__traceback__.tb_next
>           raise e

/usr/lib/python3.8/site-packages/_pytest/runner.py:170:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

item = <CheckdocsItem project>

    def pytest_runtest_call(item: Item) -> None:
        _update_current_test_var(item, "call")
        try:
            del sys.last_type
            del sys.last_value
            del sys.last_traceback
        except AttributeError:
            pass
        try:
>           item.runtest()

/usr/lib/python3.8/site-packages/_pytest/runner.py:162:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <CheckdocsItem project>

    def runtest(self):
>       desc = self.get_long_description()

/usr/lib/python3.8/site-packages/pytest_checkdocs/__init__.py:29:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <CheckdocsItem project>

    def get_long_description(self):
>       return Description.from_md(ensure_clean(pep517.meta.load('.').metadata))

/usr/lib/python3.8/site-packages/pytest_checkdocs/__init__.py:60:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

root = '.'

    def load(root):
        """
        Given a source directory (root) of a package,
        return an importlib.metadata.Distribution object
        with metadata build from that package.
        """
        root = os.path.expanduser(root)
        system = compat_system(root)
        builder = functools.partial(build, source_dir=root, system=system)
>       path = Path(build_as_zip(builder))

/usr/lib/python3.8/site-packages/pep517/meta.py:71:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

builder = functools.partial(<function build at 0x7f58a2b64700>, source_dir='.', system={'requires': ['setuptools>=42', 'wheel', 'setuptools_scm[toml]>=3.4.3'], 'build-backend': 'setuptools.build_meta'})

    def build_as_zip(builder=build):
        with tempdir() as out_dir:
>           builder(dest=out_dir)

/usr/lib/python3.8/site-packages/pep517/meta.py:58:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

source_dir = '.', dest = '/tmp/tmpaotef6ft', system = {'build-backend': 'setuptools.build_meta', 'requires': ['setuptools>=42', 'wheel', 'setuptools_scm[toml]>=3.4.3']}

    def build(source_dir='.', dest=None, system=None):
        system = system or load_system(source_dir)
        dest = os.path.join(source_dir, dest or 'dist')
        mkdir_p(dest)
        validate_system(system)
        hooks = Pep517HookCaller(
            source_dir, system['build-backend'], system.get('backend-path')
        )

        with hooks.subprocess_runner(quiet_subprocess_runner):
            with BuildEnvironment() as env:
                env.pip_install(system['requires'])
>               _prep_meta(hooks, env, dest)

/usr/lib/python3.8/site-packages/pep517/meta.py:53:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

hooks = <pep517.wrappers.Pep517HookCaller object at 0x7f589ef11130>, env = <pep517.envbuild.BuildEnvironment object at 0x7f589ef114c0>, dest = '/tmp/tmpaotef6ft'

    def _prep_meta(hooks, env, dest):
>       reqs = hooks.get_requires_for_build_wheel({})

/usr/lib/python3.8/site-packages/pep517/meta.py:28:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pep517.wrappers.Pep517HookCaller object at 0x7f589ef11130>, config_settings = {}

    def get_requires_for_build_wheel(self, config_settings=None):
        """Identify packages required for building a wheel

        Returns a list of dependency specifications, e.g.::

            ["wheel >= 0.25", "setuptools"]

        This does not include requirements specified in pyproject.toml.
        It returns the result of calling the equivalently named hook in a
        subprocess.
        """
>       return self._call_hook('get_requires_for_build_wheel', {
            'config_settings': config_settings
        })

/usr/lib/python3.8/site-packages/pep517/wrappers.py:168:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <pep517.wrappers.Pep517HookCaller object at 0x7f589ef11130>, hook_name = 'get_requires_for_build_wheel', kwargs = {'config_settings': {}}

    def _call_hook(self, hook_name, kwargs):
        # On Python 2, pytoml returns Unicode values (which is correct) but the
        # environment passed to check_call needs to contain string values. We
        # convert here by encoding using ASCII (the backend can only contain
        # letters, digits and _, . and : characters, and will be used as a
        # Python identifier, so non-ASCII content is wrong on Python 2 in
        # any case).
        # For backend_path, we use sys.getfilesystemencoding.
        if sys.version_info[0] == 2:
            build_backend = self.build_backend.encode('ASCII')
        else:
            build_backend = self.build_backend
        extra_environ = {'PEP517_BUILD_BACKEND': build_backend}

        if self.backend_path:
            backend_path = os.pathsep.join(self.backend_path)
            if sys.version_info[0] == 2:
                backend_path = backend_path.encode(sys.getfilesystemencoding())
            extra_environ['PEP517_BACKEND_PATH'] = backend_path

        with tempdir() as td:
            hook_input = {'kwargs': kwargs}
            compat.write_json(hook_input, pjoin(td, 'input.json'),
                              indent=2)

            # Run the hook in a subprocess
            with _in_proc_script_path() as script:
                python = self.python_executable
>               self._subprocess_runner(
                    [python, abspath(str(script)), hook_name, td],
                    cwd=self.source_dir,
                    extra_environ=extra_environ
                )

/usr/lib/python3.8/site-packages/pep517/wrappers.py:265:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

cmd = ['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'get_requires_for_build_wheel', '/tmp/tmpdckmn8mo']
cwd = '/home/tkloczko/rpmbuild/BUILD/nest_asyncio-1.5.1', extra_environ = {'PEP517_BUILD_BACKEND': 'setuptools.build_meta'}

    def quiet_subprocess_runner(cmd, cwd=None, extra_environ=None):
        """A method of calling the wrapper subprocess while suppressing output."""
        env = os.environ.copy()
        if extra_environ:
            env.update(extra_environ)

>       check_output(cmd, cwd=cwd, env=env, stderr=STDOUT)

/usr/lib/python3.8/site-packages/pep517/wrappers.py:75:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

timeout = None, popenargs = (['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'get_requires_for_build_wheel', '/tmp/tmpdckmn8mo'],)
kwargs = {'cwd': '/home/tkloczko/rpmbuild/BUILD/nest_asyncio-1.5.1', 'env': {'AR': '/usr/bin/gcc-ar', 'BASH_FUNC_which%%': '() ...sh-protection -fcf-protection -fdata-sections -ffunction-sections -flto=auto -flto-partition=none', ...}, 'stderr': -2}

    def check_output(*popenargs, timeout=None, **kwargs):
        r"""Run command with arguments and return its output.

        If the exit code was non-zero it raises a CalledProcessError.  The
        CalledProcessError object will have the return code in the returncode
        attribute and output in the output attribute.

        The arguments are the same as for the Popen constructor.  Example:

        >>> check_output(["ls", "-l", "/dev/null"])
        b'crw-rw-rw- 1 root root 1, 3 Oct 18  2007 /dev/null\n'

        The stdout argument is not allowed as it is used internally.
        To capture standard error in the result, use stderr=STDOUT.

        >>> check_output(["/bin/sh", "-c",
        ...               "ls -l non_existent_file ; exit 0"],
        ...              stderr=STDOUT)
        b'ls: non_existent_file: No such file or directory\n'

        There is an additional optional argument, "input", allowing you to
        pass a string to the subprocess's stdin.  If you use this argument
        you may not also use the Popen constructor's "stdin" argument, as
        it too will be used internally.  Example:

        >>> check_output(["sed", "-e", "s/foo/bar/"],
        ...              input=b"when in the course of fooman events\n")
        b'when in the course of barman events\n'

        By default, all communication is in bytes, and therefore any "input"
        should be bytes, and the return value will be bytes.  If in text mode,
        any "input" should be a string, and the return value will be a string
        decoded according to locale encoding, or by "encoding" if set. Text mode
        is triggered by setting any of text, encoding, errors or universal_newlines.
        """
        if 'stdout' in kwargs:
            raise ValueError('stdout argument not allowed, it will be overridden.')

        if 'input' in kwargs and kwargs['input'] is None:
            # Explicitly passing input=None was previously equivalent to passing an
            # empty string. That is maintained here for backwards compatibility.
            if kwargs.get('universal_newlines') or kwargs.get('text'):
                empty = ''
            else:
                empty = b''
            kwargs['input'] = empty

>       return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
                   **kwargs).stdout

/usr/lib64/python3.8/subprocess.py:415:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

input = None, capture_output = False, timeout = None, check = True
popenargs = (['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'get_requires_for_build_wheel', '/tmp/tmpdckmn8mo'],)
kwargs = {'cwd': '/home/tkloczko/rpmbuild/BUILD/nest_asyncio-1.5.1', 'env': {'AR': '/usr/bin/gcc-ar', 'BASH_FUNC_which%%': '() ...-fcf-protection -fdata-sections -ffunction-sections -flto=auto -flto-partition=none', ...}, 'stderr': -2, 'stdout': -1}
process = <subprocess.Popen object at 0x7f589eeb2a60>
stdout = b'Traceback (most recent call last):\n  File "/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py", line...ng pip, instead of https://github.com/user/proj/archive/master.zip use git+https://github.com/user/proj.git#egg=proj\n'
stderr = None, retcode = 1

    def run(*popenargs,
            input=None, capture_output=False, timeout=None, check=False, **kwargs):
        """Run command with arguments and return a CompletedProcess instance.

        The returned instance will have attributes args, returncode, stdout and
        stderr. By default, stdout and stderr are not captured, and those attributes
        will be None. Pass stdout=PIPE and/or stderr=PIPE in order to capture them.

        If check is True and the exit code was non-zero, it raises a
        CalledProcessError. The CalledProcessError object will have the return code
        in the returncode attribute, and output & stderr attributes if those streams
        were captured.

        If timeout is given, and the process takes too long, a TimeoutExpired
        exception will be raised.

        There is an optional argument "input", allowing you to
        pass bytes or a string to the subprocess's stdin.  If you use this argument
        you may not also use the Popen constructor's "stdin" argument, as
        it will be used internally.

        By default, all communication is in bytes, and therefore any "input" should
        be bytes, and the stdout and stderr will be bytes. If in text mode, any
        "input" should be a string, and stdout and stderr will be strings decoded
        according to locale encoding, or by "encoding" if set. Text mode is
        triggered by setting any of text, encoding, errors or universal_newlines.

        The other arguments are the same as for the Popen constructor.
        """
        if input is not None:
            if kwargs.get('stdin') is not None:
                raise ValueError('stdin and input arguments may not both be used.')
            kwargs['stdin'] = PIPE

        if capture_output:
            if kwargs.get('stdout') is not None or kwargs.get('stderr') is not None:
                raise ValueError('stdout and stderr arguments may not be used '
                                 'with capture_output.')
            kwargs['stdout'] = PIPE
            kwargs['stderr'] = PIPE

        with Popen(*popenargs, **kwargs) as process:
            try:
                stdout, stderr = process.communicate(input, timeout=timeout)
            except TimeoutExpired as exc:
                process.kill()
                if _mswindows:
                    # Windows accumulates the output in a single blocking
                    # read() call run on child threads, with the timeout
                    # being done in a join() on those threads.  communicate()
                    # _after_ kill() is required to collect that and add it
                    # to the exception.
                    exc.stdout, exc.stderr = process.communicate()
                else:
                    # POSIX _communicate already populated the output so
                    # far into the TimeoutExpired exception.
                    process.wait()
                raise
            except:  # Including KeyboardInterrupt, communicate handled that.
                process.kill()
                # We don't call process.wait() as .__exit__ does that for us.
                raise
            retcode = process.poll()
            if check and retcode:
>               raise CalledProcessError(retcode, process.args,
                                         output=stdout, stderr=stderr)
E               subprocess.CalledProcessError: Command '['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'get_requires_for_build_wheel', '/tmp/tmpdckmn8mo']' returned non-zero exit status 1.

/usr/lib64/python3.8/subprocess.py:516: CalledProcessError
========================================================================= short test summary info ==========================================================================
FAILED ::project - subprocess.CalledProcessError: Command '['/usr/bin/python3', '/usr/lib/python3.8/site-packages/pep517/in_process/_in_process.py', 'get_requires_for_bu...
======================================================================= 1 failed, 6 passed in 4.32s ========================================================================

Upstream contribution

Thanks for this library, it really saves our lives.
I was thinking: since it is so useful, have you considered contributing it to CPython to make this feature standard in asyncio? Or do you see issues that would prevent doing so? Sorry if there are already discussions about it, but I couldn't find any.

Using with Quamash

I use this project a lot recently with Jupyter notebooks for testing async code, thanks! Hope it's OK to leave a question here?

In your readme it says this probably will not work with quamash event loops. I was wondering if you could tell me more about that, such as what the main difficulty is?

I am using Quamash for a desktop Qt application and I'm coming up against the classic problem of interoperability with legacy code that I cannot change: I need to call an 'async def' function and wait for it to complete inside a 'def' function.

Do you think nest_asyncio could help? Or do you think it's not worth it because it may introduce really hard-to-track-down bugs?

Version 1.5.0 broke my pytest

As of two hours ago, my tests stopped passing:


================================================= FAILURES =================================================
________________________________________________ test_init _________________________________________________

pyfuncitem = <Function test_init>

    @pytest.mark.tryfirst
    def pytest_pyfunc_call(pyfuncitem):
        funcargs = pyfuncitem.funcargs
        testargs = {arg: funcargs[arg] for arg in pyfuncitem._fixtureinfo.argnames}
    
        if not iscoroutinefunction(pyfuncitem.obj):
            pyfuncitem.obj(**testargs)
            return True
    
        try:
            loop = funcargs["io_loop"]
        except KeyError:
            loop = tornado.ioloop.IOLoop.current()
    
>       loop.run_sync(
            lambda: pyfuncitem.obj(**testargs), timeout=get_test_timeout(pyfuncitem)
        )

/usr/local/lib/python3.9/site-packages/pytest_tornasync/plugin.py:53: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
/usr/local/lib/python3.9/site-packages/tornado/ioloop.py:524: in run_sync
    self.start()
/usr/local/lib/python3.9/site-packages/tornado/platform/asyncio.py:199: in start
    self.asyncio_loop.run_forever()
/usr/local/lib/python3.9/site-packages/nest_asyncio.py:51: in run_forever
    with manage_run(self), manage_asyncgens(self):
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 

self = <contextlib._GeneratorContextManager object at 0x111a22760>

    def __enter__(self):
        # do not keep args and kwds alive unnecessarily
        # they are only needed for recreation, which is not possible anymore
        del self.args, self.kwds, self.func
        try:
>           return next(self.gen)
E           TypeError: 'NoneType' object is not an iterator

/usr/local/Cellar/[email protected]/3.9.1_6/Frameworks/Python.framework/Versions/3.9/lib/python3.9/contextlib.py:117: TypeError
========================================= short test summary info ==========================================
FAILED tests/test_jobs.py::test_init - TypeError: 'NoneType' object is not an iterator
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! stopping after 1 failures !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
============================================ 1 failed in 0.51s =============================================

You can see more in our CI if needed.
https://github.com/jupyter-naas/naas/runs/1777653142?check_suite_focus=true

Reverting to 1.4.3 fixes the issue, as you can check here:

https://github.com/jupyter-naas/naas/runs/1777902486?check_suite_focus=true

Create new event loop if there isn't one running

On a similar note to #52, I was wondering if it would be possible/desirable to have the patched run method fall back to _run_orig if there is no event loop running on the thread it is executed on. This would allow creating a new event loop in that case, rather than throwing an exception telling me that there is currently no event loop running on that thread.

Some background on the situation in which I wished for this behavior:
I'm currently trying to contribute to the jupyter-lsp project by implementing TCP support. However, it was desired to implement this functionality using Anyio rather than asyncio. Now, within a synchronous function (which gets called on a thread where there is already an event loop running) I'd like to spawn a new thread running a new event loop in the background by creating a Thread(target=anyio.run, kwargs={'func': <my coro to run in parallel>}). But if I start that thread, anyio.run will call asyncio.run, which at that moment has already been patched somewhere in the jupyter code. The patched version then fails with the same exception as was already reported in #52:

RuntimeError: There is no current event loop in thread 'Thread-2'.

due to there not being a running event loop on the newly created thread. Moreover, through the interface provided by anyio I am not able to create a new event loop manually (at least not without explicitly using asyncio code) in the new thread.

So I thought maybe it would be more intuitive if nest_asyncio merely added functionality to the existing asyncio.run method, allowing it to be called in a context where there is already an event loop running while, at the same time, keeping the existing behavior of creating an event loop if there isn't one running yet. Maybe I'm just missing an important aspect of why this would be a bad idea?
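A minimal sketch of the fallback I have in mind (the _run_orig name is borrowed from above; how the patched run stores the original is my assumption about nest_asyncio's internals):

import asyncio

_run_orig = asyncio.run  # keep a reference before patching (assumed)

def run(main, *, debug=False):
    try:
        # A loop is already running on this thread: re-enter it.
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # No running loop: keep asyncio.run's behavior of creating one.
        return _run_orig(main, debug=debug)
    loop.set_debug(debug)
    return loop.run_until_complete(main)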

I would, of course, be more than happy to provide an MR if you also would consider this a good idea.

nest_asyncio silently breaks async loop exception handling?

The ability to set an asyncio loop exception handler to catch exceptions in tasks appears to be broken when using nest_asyncio. I thought at first this might just be for nested calls, but it happens even in simple un-nested cases where nest_asyncio.apply() has been called. See below for an example. With nest_asyncio.apply, the loop continues running indefinitely with no indication that an exception occurred inside a task; without this line the exception handler is called as expected and terminates the loop.

import asyncio
import nest_asyncio
nest_asyncio.apply()  # comment this out to see "correct" behavior


async def raise_exception():
    await asyncio.sleep(0.5)
    print('About to raise error')
    raise RuntimeError("test")


async def main():
    asyncio.create_task(raise_exception())
    while True:
        await asyncio.sleep(0.3)
        print("Loop still running")


def handleException(loop: asyncio.AbstractEventLoop, context: dict):
    print('Unhandled exception in async task, will stop loop. Exception context:\n%s' % (context,))
    loop.stop()


loop = asyncio.get_event_loop()
loop.set_exception_handler(handleException)
try:
    loop.run_until_complete(main())
except Exception as e:
    print('Caught exception from run')
print('Run finished')

Output with nest_asyncio.apply():

Loop still running
About to raise error
Loop still running
Loop still running
Loop still running
...

Output without nest_asyncio.apply():

Loop still running
About to raise error
Unhandled exception in async task, will stop loop. Exception context:
{'message': 'Task exception was never retrieved', 'exception': RuntimeError('test'), 'future': <Task finished name='Task-2' coro=<raise_exception() done, defined at ...:7> exception=RuntimeError('test')>}
Caught exception from run
Run finished

Similarly, if no exception handler is set: without .apply() a "Task exception was never retrieved" message is printed to stderr, but with .apply() no such message is printed.

Is there a way to restore typical exception handling? The current behavior makes debugging more complex nested calls with exceptions rather difficult.

I've tested this with identical results on Python 3.8.10 on both Windows and Linux.

tasks are not cancelled after ctrl-c

This seems related enough to #57 to be a duplicate, but it also seems sufficiently different (i.e., the CancelledError is not raised) to merit its own issue.

What I'd like is for unfinished tasks to be cancelled when, e.g., a ctrl-c happens.

Here's a minimal example:

import asyncio
import nest_asyncio

if True:  # toggle this flag; the first output below is with nest_asyncio applied
    print("applying nest_asyncio")
    nest_asyncio.apply()


async def get_results():
    print("start")
    try:
        await asyncio.sleep(5)
    except asyncio.CancelledError:
        print("cancelled!")
    print("stop")
    return 123


try:
    res = asyncio.run(get_results())
    print("res", res)
except KeyboardInterrupt:
    print("received ctrl-c")
    raise

results in

$ python cancel2.py
applying nest_asyncio
start
^Creceived ctrl-c
Traceback (most recent call last):
  File "cancel2.py", line 20, in <module>
    res = asyncio.run(get_results())
  File "/mnt/c/Users/me/Documents/nest_asyncio/nest_asyncio.py", line 32, in run
    return loop.run_until_complete(future)
  File "/mnt/c/Users/me/Documents/nest_asyncio/nest_asyncio.py", line 64, in run_until_complete
    self._run_once()
  File "/mnt/c/Users/me/Documents/nest_asyncio/nest_asyncio.py", line 87, in _run_once
    event_list = self._selector.select(timeout)
  File "/home/me/miniconda3/envs/aq/lib/python3.8/selectors.py", line 468, in select
    fd_event_list = self._selector.poll(timeout, max_ev)
KeyboardInterrupt

(the CancelledError is unexpectedly not raised), vs the desired:

$ python cancel2.py
start
^Ccancelled!
stop
received ctrl-c
Traceback (most recent call last):
  File "cancel2.py", line 20, in <module>
    res = asyncio.run(get_results())
  File "/home/me/miniconda3/envs/aq/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/home/me/miniconda3/envs/aq/lib/python3.8/asyncio/base_events.py", line 603, in run_until_complete
    self.run_forever()
  File "/home/me/miniconda3/envs/aq/lib/python3.8/asyncio/base_events.py", line 570, in run_forever
    self._run_once()
  File "/home/me/miniconda3/envs/aq/lib/python3.8/asyncio/base_events.py", line 1823, in _run_once
    event_list = self._selector.select(timeout)
  File "/home/me/miniconda3/envs/aq/lib/python3.8/selectors.py", line 468, in select
    fd_event_list = self._selector.poll(timeout, max_ev)
KeyboardInterrupt

I'm using the latest nest_asyncio installed from source.

$ pip list | grep nest_
nest-asyncio                  1.5.1               /mnt/c/Users/me/Documents/nest_asyncio

Two or more calls to run_until_complete cause run_until_complete to not exit

In the following, I schedule n tasks that re-enter run_until_complete.
When n=1, the program exits. When n=2, the program never exits. Is this expected?

import asyncio
import functools
import time

import nest_asyncio
nest_asyncio.apply()

def sync(coro):
    """
    Make a synchronous function from an asynchronous one.
    :param coro: the coroutine to run to completion
    :return: the coroutine's result
    """
    result, = asyncio.get_event_loop().run_until_complete(asyncio.gather(coro))
    return result

async def sync_to_coroutine(func, *args, **kw):
    """
    Make a coroutine from a synchronous function.
    """
    try:
        return func(*args, **kw)
    finally:
        # every async needs an await.
        await asyncio.sleep(0)

def main():
    async def background(timeout):
        await asyncio.sleep(timeout)
        print(f"Background: {timeout}")

    loop = asyncio.get_event_loop()
    # Run some background work to check we are never blocked
    bg_tasks = [
        loop.create_task(background(i))
        for i in range(10)
    ]



    async def long_running_async_task(result):
        # Simulate slow IO
        print(f"...START long_running_async_task [{result}]")
        await asyncio.sleep(4)
        print(f"...END   long_running_async_task [{result}]")
        return result

    def sync_function_with_async_dependency(result):
        print(f"...START sync_function_with_async_dependency [{result}]")
        result = sync(long_running_async_task(result))
        print(f"...END   sync_function_with_async_dependency [{result}]")
        return result

    # Call sync_function_with_async_dependency
    # One reentrant task is OK
    # Multiple reentrant tasks->fails to exit
    n = 2
    for i in range(n):
       bg_tasks.append(sync_to_coroutine(sync_function_with_async_dependency, i))

    # OK
    # bg_tasks.append(long_running_async_task(123))
    # bg_tasks.append(long_running_async_task(456))

    task = asyncio.gather(*bg_tasks)  # , loop=loop)
    loop.run_until_complete(task)

if __name__ == '__main__':
    main()

Exception in callback Task.__wakeup(<Future finished result=None>)

Exception in callback Task.__wakeup()
handle: <Handle Task.__wakeup()>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/nest_asyncio.py", line 150, in run
    ctx.run(self._callback, *self._args)
  File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py", line 329, in __wakeup
    self.__step()
  File "/usr/local/lib/python3.7/site-packages/nest_asyncio.py", line 108, in step
    step_orig(task, exc)
  File "/usr/local/Cellar/python/3.7.7/Frameworks/Python.framework/Versions/3.7/lib/python3.7/asyncio/tasks.py", line 313, in __step
    _leave_task(self._loop, self)
  File "/usr/local/lib/python3.7/site-packages/nest_asyncio.py", line 122, in leave_task
    del curr_tasks[loop]
KeyError: <_UnixSelectorEventLoop running=True closed=False debug=False>

Working with tornado

I ran into issues combining this package with tornado:

  File "/Users/maartenbreddels/miniconda3/envs/vaex37-test/lib/python3.7/site-packages/tornado/gen.py", line 712, in __init__
    if self.handle_yield(first_yielded):
  File "/Users/maartenbreddels/miniconda3/envs/vaex37-test/lib/python3.7/site-packages/tornado/gen.py", line 789, in handle_yield
    self.io_loop.add_future(self.future, inner)
  File "/Users/maartenbreddels/miniconda3/envs/vaex37-test/lib/python3.7/site-packages/tornado/ioloop.py", line 693, in add_future
    assert is_future(future)
AssertionError

The problem is that tornado keeps a tuple of types that it considers futures. I have this workaround:
https://github.com/vaexio/vaex/blob/293d510ef5c8b96ce75224e909a005599c96cc92/packages/vaex-core/vaex/asyncio.py#L5

But maybe you have a better idea for this, or maybe tornado is to blame for this?
This line is the issue: https://github.com/tornadoweb/tornado/blob/712d61079defdad23b0a5e9fe0090b54e55cf7d0/tornado/concurrent.py#L49

Is there a pattern to switch asyncio projects to nest_asyncio?

I'm trying to port a Flask webservice to an async one (e.g. quart).
I've run into the issue of async creep: I don't see how to call async IO functions from sync web-handler functions that are called by async webservers (details: https://stackoverflow.com/questions/59738489/why-isnt-run-until-complete-re-entrant-how-to-incrementally-port-to-async-wi).

It occurred to me that a re-entrant async event loop is all that is required to allow sync-land and async-land to live in harmony, but GvR had religious objections to it: https://bugs.python.org/issue22239 , with the consequence that sync code cannot efficiently call async code without adding threads. And so I am here -- can nest_asyncio be used with non-nest_asyncio projects and allow the event loop to become reentrant?

For example, let's say I use the async Flask replacement Quart: is there a general pattern to safely switch Quart to use nest_asyncio without refactoring Quart (e.g. by replacing the event-loop hook)?
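For reference, the kind of call pattern I'm after, as a minimal self-contained sketch (the handler and coroutine names are made up for illustration):

import asyncio

import nest_asyncio
nest_asyncio.apply()

async def fetch_from_backend():
    # stand-in for real async IO
    await asyncio.sleep(0.1)
    return {'status': 'ok'}

def sync_web_handler():
    # A plain 'def' handler invoked while the server's loop is running;
    # with nest_asyncio this re-enters the already-running loop.
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(fetch_from_backend())

print(sync_web_handler())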

Websocket hangs with Windows and Python 3.8

Connection via websockets hangs when running on Windows with Python 3.8, nest_asyncio, and the default WindowsProactorEventLoopPolicy. A simple program can reproduce this:

import asyncio
import nest_asyncio
from websockets import connect

nest_asyncio.apply()

async def _connect():
    websocket = await connect('wss://wss.foo.com:9876')

def client():
    loop = asyncio.get_event_loop()
    return loop.run_until_complete(_connect())

if __name__ == "__main__":
    client()

This works on other platforms, Python versions, and policies, or without nest_asyncio. I did a little bit of tracing, and it looks like the connect() completed successfully, but it somehow got stuck when selecting the next event.
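One thing that may be worth trying, given that the hang seems specific to the proactor policy (a sketch, not verified against this exact setup):

import asyncio
import sys

import nest_asyncio

if sys.platform == 'win32':
    # Fall back to the selector-based loop instead of the default proactor loop.
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())

nest_asyncio.apply()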

Fail to wait for user action

I have a simple GUI in a Jupyter Notebook with two widgets: a drop down menu with 3 choices and a push button.
The desired behavior is:

  1. Execute the cell that contains the widgets (see code below)
  2. Wait (as long as needed) for the user to click on the push button
  3. Print the value of the drop down menu (the user's choice)

Using asyncio out of the box does not work in Jupyter Notebook because there is already an event loop running. I tried to use nest_asyncio to solve that issue, and the RuntimeError did go away. But instead, the notebook hangs forever: when I click on the push button, nothing happens.
I saw there were issues with asyncio.Future in #22, so maybe I am missing something. Any idea how I can leverage nest_asyncio to make this work? Thanks

import nest_asyncio
nest_asyncio.apply()
from ipywidgets import Button, Dropdown, HBox
import asyncio

choices = Dropdown(options=[1, 2, 3], description='Choice')
button = Button(description="Select")
panel = HBox([choices, button])

def wait_for_change(push_button, dropdown):
    future = asyncio.Future()  # Make getvalue function awaitable
    def getvalue(change):
        future.set_result(dropdown.value)
        push_button.on_click(getvalue, remove=True)  # Unbind function from widget
    push_button.on_click(getvalue)
    return future

async def main():
    user_choice = await wait_for_change(button, choices)
    print(f"User selected option {user_choice}")
    return user_choice

display(panel)
print("Which option would you like to choose?")
out = asyncio.run(main())
print("End of code")

Can't patch loop of type <class 'NoneType'>

After updating to nest_asyncio==1.5.2, it seems that our test suite has started to fail with the following error (the nest_asyncio==1.5.1 release still seems to work fine though):

runtimes/alibi-explain/tests/conftest.py:29: in <module>
    nest_asyncio.apply()
../../.virtualenvs/mlserver-37/lib/python3.7/site-packages/nest_asyncio.py:14: in apply
    raise ValueError('Can\'t patch loop of type %s' % type(loop))
E   ValueError: Can't patch loop of type <class 'NoneType'>

Could this be due to calling nest_asyncio.apply() before an async loop has been created?
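If that is the cause, a possible workaround we would try, as a guess, is to create and register a loop before patching:

import asyncio

import nest_asyncio

# Ensure the current thread has an event loop before nest_asyncio patches it.
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
nest_asyncio.apply(loop)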

Python 3.8.2 breaks nest-asyncio

This trivial demo works with CPython 3.8.1 and breaks with 3.8.2:

import asyncio
import nest_asyncio
nest_asyncio.apply()


async def f():
    print("OK")


async def g():
    loop = asyncio.get_event_loop()
    loop.run_until_complete(f())

if __name__ == "__main__":
    asyncio.run(g())

Works

  • Linux x64, anaconda, nest_asyncio 1.2.2
  • python=3.8.1=h357f687_2 (conda-forge)

Output: OK

Broken

  • Linux x64, anaconda, nest_asyncio 1.2.2
  • python=3.8.2=h9d8adfe_1_cpython (conda-forge)

Output: RuntimeError: This event loop is already running

nest-asyncio fails to reset ContextVar

nest-asyncio breaks the ContextVar.reset method.

I think the problem is in these lines, as they create a separate context for each callback run:

ctx = self._context.copy()
ctx.run(self._callback, *self._args)
if ctx:
    self._context.run(update_from_context, ctx)

When I removed the Handle patching, this issue disappeared.
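For context, here is a standalone illustration (independent of nest_asyncio) of why a token created inside a copied context cannot be reset from the original one; the names are made up for the demo:

from contextvars import ContextVar, copy_context

var: ContextVar[int] = ContextVar('var')

def set_in_copy():
    # The token is created while running inside the copied context.
    return var.set(1)

ctx = copy_context()
token = ctx.run(set_in_copy)
try:
    var.reset(token)  # executed in the outer context
except ValueError as e:
    print(e)  # "... was created in a different Context"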

Code to reproduce issue:

from asyncio import run, sleep
from contextvars import ContextVar

from nest_asyncio import apply

ctx_var: ContextVar[int] = ContextVar('ctx_var')


async def main() -> None:
    token = ctx_var.set(1)
    await sleep(1)
    ctx_var.reset(token)


if __name__ == '__main__':
    apply()
    run(main())

Will produce such traceback:

Traceback (most recent call last):
  File "/Users/yuriikarabas/PycharmProjects/sandbox/main.py", line 17, in <module>
    run(main())
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/Users/yuriikarabas/PycharmProjects/sandbox/venv/lib/python3.9/site-packages/nest_asyncio.py", line 98, in run_until_complete
    return f.result()
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/futures.py", line 201, in result
    raise self._exception
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/asyncio/tasks.py", line 256, in __step
    result = coro.send(None)
  File "/Users/yuriikarabas/PycharmProjects/sandbox/toy_io.py", line 12, in main
    ctx_var.reset(token)
ValueError: <Token var=<ContextVar name='ctx_var' at 0x7fbde8933ea0> at 0x7fbde8ed4ac0> was created in a different Context
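
To illustrate the mechanism with a standalone sketch (independent of nest_asyncio): a Token created while running inside a copied Context cannot be reset from the original context, which is exactly the situation the per-callback ctx.run() above creates.

import contextvars

var = contextvars.ContextVar('var')

def set_in_copy():
    return var.set(1)  # the Token is bound to the copied context

ctx = contextvars.copy_context()
token = ctx.run(set_in_copy)

try:
    var.reset(token)  # we are back in the original context
except ValueError as e:
    print(e)  # "... was created in a different Context"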

RuntimeError: There is no current event loop in thread "worker 0"

Hi,

Thank you, @erdewit, for building this library.

When using the nest_asyncio.apply() patch with the Bottle web framework and running the paste server, I encounter the following error: RuntimeError: There is no current event loop in thread 'worker 0'

Here is the complete code to reproduce issue:

from bottle import route, run, template
import nest_asyncio

@route('/')
def index(name='jack'):
    nest_asyncio.apply()
    return template('<b>Hello {{name}}</b>!', name=name)

run(server='paste', host='localhost', port=8080, debug=True)

If you don't have these libraries installed then run:
pip install bottle
pip install nest-asyncio
pip install paste


Here is the complete exception:

Error: 500 Internal Server Error
Sorry, the requested URL 'http://localhost:8080/' caused an error:

Internal Server Error
Exception: RuntimeError("There is no current event loop in thread 'worker 0'.")

Traceback:
Traceback (most recent call last):
  File "c:\Users\env\lib\site-packages\bottle.py", line 868, in _handle
    return route.call(**args)
  File "c:\Users\env\lib\site-packages\bottle.py", line 1748, in wrapper
    rv = callback(*a, **ka)
  File "c:\Users\mybottle.py", line 5, in index
    nest_asyncio.apply()
  File "c:\Users\env\lib\site-packages\nest_asyncio.py", line 12, in apply
    loop = loop or asyncio.get_event_loop()
  File "C:\Users\AppData\Local\Programs\Python\Python39\lib\asyncio\events.py", line 642, in get_event_loop
    raise RuntimeError('There is no current event loop in thread %r.'
RuntimeError: There is no current event loop in thread 'worker 0'.

The check threading.current_thread() is threading.main_thread() in cpython/Lib/asyncio/events.py evaluates to False here, which causes the exception to be raised.
Could .apply() be updated to set a new event loop for this thread when one is missing?

loop = loop or asyncio.get_event_loop()
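
A sketch of what that change could look like (hypothetical code, not the library's actual implementation):

import asyncio

def apply(loop=None):
    # Fall back to a fresh loop when the current thread has none,
    # instead of letting get_event_loop() raise in worker threads.
    try:
        loop = loop or asyncio.get_event_loop()
    except RuntimeError:
        loop = asyncio.new_event_loop()
        asyncio.set_event_loop(loop)
    # ... continue with the normal patching of `loop` ...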

nest_asyncio broke something in nbclient

My unit tests are now raising this exception:

_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
/home/amos/miniconda3/envs/aq/lib/python3.8/site-packages/testbook/testbook.py:52: in __enter__
    self._prepare()
/home/amos/miniconda3/envs/aq/lib/python3.8/site-packages/testbook/testbook.py:46: in _prepare
    self.client.execute()
/home/amos/miniconda3/envs/aq/lib/python3.8/site-packages/testbook/client.py:147: in execute
    super().execute_cell(cell, index)
/home/amos/miniconda3/envs/aq/lib/python3.8/site-packages/nbclient/util.py:74: in wrapped
    return just_run(coro(*args, **kwargs))
/home/amos/miniconda3/envs/aq/lib/python3.8/site-packages/nbclient/util.py:53: in just_run
    return loop.run_until_complete(coro)
/home/amos/miniconda3/envs/aq/lib/python3.8/asyncio/base_events.py:616: in run_until_complete
    return future.result()
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <testbook.client.TestbookNotebookClient object at 0x7fc0bcd87670>
cell = {'cell_type': 'code', 'execution_count': None, 'id': '6765142a', 'metadata': {'execution': {}}, 'outputs': [], '<snip>
cell_index = 6, execution_count = None, store_history = True

    async def async_execute_cell(
            self,
            cell: NotebookNode,
            cell_index: int,
            execution_count: t.Optional[int] = None,
            store_history: bool = True) -> NotebookNode:
        """
        Executes a single code cell.

        To execute all cells see :meth:`execute`.

        Parameters
        ----------
        cell : nbformat.NotebookNode
            The cell which is currently being processed.
        cell_index : int
            The position of the cell within the notebook object.
        execution_count : int
            The execution count to be assigned to the cell (default: Use kernel response)
        store_history : bool
            Determines if history should be stored in the kernel (default: False).
            Specific to ipython kernels, which can store command histories.

        Returns
        -------
        output : dict
            The execution output payload (or None for no output).

        Raises
        ------
        CellExecutionError
            If execution failed and should raise an exception, this will be raised
            with defaults about the failure.

        Returns
        -------
        cell : NotebookNode
            The cell which was just processed.
        """
        assert self.kc is not None
        if cell.cell_type != 'code' or not cell.source.strip():
            self.log.debug("Skipping non-executing cell %s", cell_index)
            return cell

        if self.record_timing and 'execution' not in cell['metadata']:
            cell['metadata']['execution'] = {}

        self.log.debug("Executing cell:\n%s", cell.source)

        cell_allows_errors = (not self.force_raise_errors) and (
            self.allow_errors
            or "raises-exception" in cell.metadata.get("tags", []))

        parent_msg_id = await ensure_async(
            self.kc.execute(
                cell.source,
                store_history=store_history,
                stop_on_error=not cell_allows_errors
            )
        )
        # We launched a code cell to execute
        self.code_cells_executed += 1
        exec_timeout = self._get_timeout(cell)

        cell.outputs = []
        self.clear_before_next_output = False

        task_poll_kernel_alive = asyncio.ensure_future(
            self._async_poll_kernel_alive()
        )
        task_poll_output_msg = asyncio.ensure_future(
            self._async_poll_output_msg(parent_msg_id, cell, cell_index)
        )
        self.task_poll_for_reply = asyncio.ensure_future(
            self._async_poll_for_reply(
                parent_msg_id, cell, exec_timeout, task_poll_output_msg, task_poll_kernel_alive
            )
        )
        try:
            exec_reply = await self.task_poll_for_reply
        except asyncio.CancelledError:
            # can only be cancelled by task_poll_kernel_alive when the kernel is dead
            task_poll_output_msg.cancel()
>           raise DeadKernelError("Kernel died")
E           nbclient.exceptions.DeadKernelError: Kernel died

/home/amos/miniconda3/envs/aq/lib/python3.8/site-packages/nbclient/client.py:845: DeadKernelError

The exception goes away if I downgrade nest_asyncio back to 1.5.1.

Exceptions support

I am having an issue with this library: an exception raised in a coroutine can cause stalls. Here is a very simple sample:

import asyncio
import nest_asyncio


async def nested_function():
    print("Nested function called")
    await asyncio.sleep(1)

async def crashing_function():
    print("Launching crashing function")
    await asyncio.sleep(0.01)
    raise RuntimeError("This thing crashed")

def recursive_function():
    print("Initial")
    loop = asyncio.get_running_loop()
    loop.run_until_complete(nested_function())

async def calls_recursive():
    await asyncio.sleep(0.01)
    recursive_function()

async def async_main():
    print("Async main")
    nest_asyncio.apply(loop)
    tasks = [calls_recursive(), crashing_function()]
    print("Async main resuming")
    await asyncio.gather(*tasks)

loop = asyncio.get_event_loop()
loop.run_until_complete(async_main())

What I would expect is for the nested run_until_complete to finish the nested function, and for the exception raised in crashing_function() to be reported at the await asyncio.gather() call. Instead, the nested run_until_complete call just hangs forever.

get_asyncgen_hooks not available in python 3.5

Hi,

We've been using nest_asyncio for a while now (thanks!). Just yesterday our test failed with

  File "/home/travis/virtualenv/python3.5.6/lib/python3.5/site-packages/nest_asyncio.py", line 62, in run_forever
    old_agen_hooks = sys.get_asyncgen_hooks()
AttributeError: module 'sys' has no attribute 'get_asyncgen_hooks'

It looks like sys.get_asyncgen_hooks wasn't introduced until Python 3.6, and this package is supposed to support Python 3.5+.

Thanks,
Jessie
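
A minimal compatibility sketch (hypothetical, if 3.5 support is to be kept): guard the call, since the asyncgen hooks only exist on Python 3.6+ (PEP 525).

import sys

# sys.get_asyncgen_hooks() was added in Python 3.6; fall back to
# None on 3.5 and skip saving/restoring the hooks there:
if hasattr(sys, 'get_asyncgen_hooks'):
    old_agen_hooks = sys.get_asyncgen_hooks()
else:
    old_agen_hooks = None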

Not compatible with Python 3.6

Hi, just letting you know that commit 3a2c8a2 broke Python 3.6 (and probably 3.5) compatibility, because run_forever is no longer patched.

1.2.3 seems to work perfectly for this version, so I don't think a fix is necessary; you might want to update the README though.

Thanks for this package!

RuntimeError: Cannot close a running event loop

I just installed nest_asyncio in my IPython environment (using Jupyter Notebook) to get a basic example of an asynchronous function running.

import asyncio
import datetime
import nest_asyncio
nest_asyncio.apply()

async def display_date(loop):
    end_time = loop.time() + 5.0
    while True:
        print(datetime.datetime.now())
        if (loop.time() + 1.0) >= end_time:
            break
        await asyncio.sleep(1)


loop = asyncio.get_event_loop()
# Blocking call which returns when the display_date() coroutine is done
loop.run_until_complete(display_date(loop))
loop.close()

It works, but how should I close the loop after execution?
Due to IPython's main async loop, if I try to use loop.close() it doesn't work and throws an error:

---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-9-c73182512528> in <module>
     16 # Blocking call which returns when the display_date() coroutine is done
     17 loop.run_until_complete(display_date(loop))
---> 18 loop.close()

~\Anaconda3\lib\asyncio\selector_events.py in close(self)
     81     def close(self):
     82         if self.is_running():
---> 83             raise RuntimeError("Cannot close a running event loop")
     84         if self.is_closed():
     85             return

RuntimeError: Cannot close a running event loop

Tested on Windows 10, Python 3.7.5 and 3.9.1.
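
For what it's worth, a sketch under the assumption that this runs inside Jupyter: the loop returned by get_event_loop() there is the kernel's own running loop, so it cannot be closed; simply omitting loop.close() avoids the error.

loop = asyncio.get_event_loop()
loop.run_until_complete(display_date(loop))
# No loop.close() here: in Jupyter this is the kernel's own running
# loop, and the kernel still needs it after the cell finishes.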

Nest asyncio breaks JupyterLab (tornado)

[E 09:55:15.053 LabApp] Uncaught exception GET /nbconvert/webpdf/basics.ipynb?download=true (127.0.0.1)
    HTTPServerRequest(protocol='http', host='localhost:8888', method='GET', uri='/nbconvert/webpdf/basics.ipynb?download=true', version='HTTP/1.1', remote_ip='127.0.0.1')
    Traceback (most recent call last):
      File "/home/sylvain/miniconda3/lib/python3.7/site-packages/tornado/web.py", line 1703, in _execute
        result = await result
      File "/home/sylvain/miniconda3/lib/python3.7/site-packages/tornado/gen.py", line 748, in run
        yielded = self.gen.send(value)
      File "/home/sylvain/miniconda3/lib/python3.7/site-packages/notebook/nbconvert/handlers.py", line 152, in get
        self.finish(output)
      File "/home/sylvain/miniconda3/lib/python3.7/site-packages/tornado/web.py", line 1126, in finish
        self.write(chunk)
      File "/home/sylvain/miniconda3/lib/python3.7/site-packages/tornado/web.py", line 834, in write
        raise TypeError(message)
    TypeError: write() only accepts bytes, unicode, and dict objects

Running this with serial_asyncio fails to close the COM ports

The first time the following serial application runs, it succeeds. However, the second time it is run it fails:

SerialException: could not open port 'com3': PermissionError(13, 'Access is denied.', None, 5)

import asyncio
import serial_asyncio
import nest_asyncio

async def main(loop):
    reader, _ = await serial_asyncio.open_serial_connection(url='com3', baudrate=115200)
    print('Reader created')
    _, writer = await serial_asyncio.open_serial_connection(url='com4', baudrate=115200)
    print('Writer created')
    messages = [b'foo\n', b'bar\n', b'baz\n', b'qux\n']
    sent = send(writer, messages)
    received = recv(reader)

    await asyncio.wait([sent, received])

    print('finished! main')

async def send(w, msgs):
    for msg in msgs:
        w.write(msg)
        print(f'sent: {msg.decode().rstrip()}')
        await asyncio.sleep(0.5)
    w.write(b'DONE\n')
    print('Done sending')

async def recv(r):
    while True:
        msg = await r.readuntil(b'\n')
        if msg.rstrip() == b'DONE':
            print('Done receiving')
            break
        print(f'received: {msg.rstrip().decode()}')

nest_asyncio.apply()

loop = asyncio.get_event_loop()
loop.run_until_complete(main(loop))
print('finished! loop')
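
One possible mitigation (a sketch, not a confirmed fix): keep a reference to both writers and close them before main() returns, so the OS releases the ports for the next run. The name com3_writer is illustrative.

async def main(loop):
    reader, com3_writer = await serial_asyncio.open_serial_connection(url='com3', baudrate=115200)
    _, writer = await serial_asyncio.open_serial_connection(url='com4', baudrate=115200)
    messages = [b'foo\n', b'bar\n', b'baz\n', b'qux\n']
    try:
        await asyncio.wait([send(writer, messages), recv(reader)])
    finally:
        # StreamWriter.close() closes the underlying serial transport,
        # which should release COM3 and COM4 for the next run.
        com3_writer.close()
        writer.close()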

Include LICENSE in released package

First of all, thanks for making nest_asyncio available - it covers a very handy use case! I was wondering if it would be possible to make a new release that includes the LICENSE file along with the rest of the files, in order to facilitate proper attribution.

Module instead of package

I see there is only one file in the package. I think you should make an installable module out of it :)
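
For reference, a minimal sketch of shipping a single .py file with setuptools (py_modules instead of packages; the version number here is illustrative):

from setuptools import setup

setup(
    name='nest_asyncio',
    version='0.0.0',  # illustrative
    py_modules=['nest_asyncio'],  # distribute the single module file
)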

nest_asyncio seems to reset contextvars

With nest_asyncio, contextvars seem to be reset after creating and running a new Task, so the test below fails.

With nest_asyncio disabled, the test succeeds.

import asyncio
import contextvars

import nest_asyncio
import pytest

# commenting out this line, the test succeeds
nest_asyncio.apply()

number_cv: contextvars.ContextVar[int] = contextvars.ContextVar('number_cv')


class TestContextVars:

    @pytest.mark.asyncio
    async def test_context_var(self):
        number_cv.set(1)

        await self.child_method()

        n = number_cv.get(-1)
        print(f'parent after child_method: number={n}')  # ok "parent after child_method: number=2"

        await asyncio.create_task(self.child_task())

        n = number_cv.get(-1)
        print(f'parent after child_task: number={n}')  # "parent after child_task: number=-1"
        assert n == 2

    async def child_method(self):
        n = number_cv.get(-1)
        print(f'in child_method: number={n}')  # ok "in child_method: number=1"
        number_cv.set(2)

    async def child_task(self):
        n = number_cv.get(-1)
        print(f'in child_task: number={n}')  # ok "in child_task: number=2"
        number_cv.set(3)
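
For comparison, a minimal sketch of what vanilla asyncio guarantees: an awaited coroutine shares the caller's context, while a Task runs in a copy of it, so the parent's value should stay 2 rather than being reset.

import asyncio
import contextvars

cv = contextvars.ContextVar('cv')

async def child():
    cv.set(3)  # only affects the Task's own copied context

async def main():
    cv.set(2)
    await asyncio.create_task(child())
    assert cv.get(-1) == 2  # holds with vanilla asyncio

asyncio.run(main())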

nest asyncio causing deadlock?

I have the following piece of code that uses python-can and aioisotp (simplified). Although this particular code has no need for nest_asyncio, my larger code uses it, so I need this to work with nest_asyncio.

import nest_asyncio
nest_asyncio.apply() 

import time
import binascii
import can
from can.interfaces.vector import VectorBus
import asyncio
import aioisotp

async def infinite(rdr):
    while True:
        payload = await rdr.read(4095)
        print("inf_rdr:", binascii.hexlify(payload).decode())
        
async def main(): 
    network = aioisotp.ISOTPNetwork(0,
                                    interface='vector',
                                    receive_own_messages=False, tx_padding=0x0)
    with network.open():
        reader, writer = await network.open_connection(0x7BB, 0x7B3)
        reader1, writer1 = await network.open_connection(0x728, 0x720)
        reader2, writer2 = await network.open_connection(0x7DF, 0x7DF)
        asyncio.create_task(infinite(reader))
        asyncio.create_task(infinite(reader1))

        writer2.write(b'\x22\xF1\x13')

        await asyncio.sleep(6.5) # The read() in infinite() does not complete until this times out
    await asyncio.sleep(0.5)
    
if __name__ == "__main__":
    asyncio.run(main())

What I find is that the write() occurs quickly, as expected. However (per the comment in the code), the read() in infinite() seems to be blocked until await asyncio.sleep(6.5) times out, so the print() occurs after 6.5 s. Whatever time I change that sleep() to, that's how long it takes before the print() is seen. If I remove nest_asyncio, it works as expected, i.e. the read() completes as soon as the data arrives (milliseconds).

Perhaps I am misunderstanding what nest_asyncio does, but it seems that with it, the read is blocked?

I have successfully used nest_asyncio in other code, but I can't tell whether I am using it incorrectly here. Any idea whether this is my code/understanding or a bug?

(Python 3.8)

Re-entrant call waits until extra events are processed, could return sooner

loop.run_until_complete(task1)
 ...  loop.run_until_complete(task11)
loop.run_until_complete(task2)
 ...   loop.run_until_complete(task12)
loop.run_until_complete(task3)
 ...   loop.run_until_complete(task13)
loop.run_until_complete(task4)
 ...   loop.run_until_complete(task14)

# None of the nested tasks complete until the slowest (last) nested task completes

In the example below I have a nested call asking to run a single task on the loop.
The loop has other tasks queued or in execution.
Some of the nested tasks take two seconds, some take one.
Ideally, the 1-second tasks would complete after 1 second.

It seems the duration of:

loop.run_until_complete(nested_task)

depends heavily on what else is in the queue, not on how long the nested task itself needs. It would be much better if it returned as soon as nested_task completed.

The program otherwise looks fully asynchronous.

import asyncio
import time
from random import randint
import nest_asyncio
nest_asyncio.apply()

def sync2async(f, *args, **kw):
    async def _f():
        return f(*args, **kw)
    return _f()

def sync(f, loop, args=(), kw={}):
    """
    Execute a non-async function in the event loop
    Generally, the function should not be IO bound unless
    it also submits the IO task to the loop.
    """
    return loop.run_until_complete(sync2async(f, *args, **kw))


async def async_db_execution(query_param, sleep=1):
    await asyncio.sleep(sleep)
    dbresult = randint(0,100)
    return query_param

def original_sync_implementation(query_param, sleep):
    # .... non-blocking sync stuff

    # async call to DB -- this thread does not block!
    # But, whomp, whomp, blocks until ALL tasks in the loop are complete,
    # not just the one submitted.
    tstart = time.time()
    result = loop.run_until_complete(async_db_execution(query_param=query_param, sleep=sleep))
    print("DB task complete should take {}, but took {}".format(sleep, time.time()-tstart))
    # Could also replace this with threaded execution!
    # loop.run_in_executor()

    # .... more non-blocking sync stuff
    return result

async def async_rest_endpoint(query_param=1, sleep=2):
    # Simulate slow IO-bound task
    # Has non-syncio markup implementation
    # but DB access is with asyncio
    print(f"...START long_running_async_task [{query_param} {sleep}]")
    result =sync(original_sync_implementation,
                 kw=dict(query_param=query_param, sleep=sleep),
                 loop=loop)
    print(f"...END   long_running_async_task [{query_param} {sleep}]")
    return result


loop = asyncio.get_event_loop()
task = asyncio.gather(*[
    loop.create_task(async_rest_endpoint(query_param=i, sleep=1+(i%2)))
    # loop.create_task(long_running_async_task())
    for i in range(10)
])
loop.run_until_complete(task)

print("DONE")
...START long_running_async_task [0 1]
...START long_running_async_task [1 2]
...START long_running_async_task [2 1]
...START long_running_async_task [3 2]
...START long_running_async_task [4 1]
...START long_running_async_task [5 2]
...START long_running_async_task [6 1]
...START long_running_async_task [7 2]
...START long_running_async_task [8 1]
...START long_running_async_task [9 2]
DB task complete after 2, should be 2.002359390258789
DB task complete after 1, should be 2.002471685409546
DB task complete after 2, should be 2.002546548843384
DB task complete after 1, should be 2.0026204586029053
DB task complete after 2, should be 2.0026872158050537
DB task complete after 1, should be 2.002741575241089
DB task complete after 2, should be 2.002797842025757
DB task complete after 1, should be 2.0028557777404785
DB task complete after 2, should be 2.0029075145721436
DB task complete after 1, should be 2.0029778480529785
...END   long_running_async_task [9 2]
...END   long_running_async_task [8 1]
...END   long_running_async_task [7 2]
...END   long_running_async_task [6 1]
...END   long_running_async_task [5 2]
...END   long_running_async_task [4 1]
...END   long_running_async_task [3 2]
...END   long_running_async_task [2 1]
...END   long_running_async_task [1 2]
...END   long_running_async_task [0 1]
DONE   # ~ 2 seconds

What I imagine is happening (pseudocode):

def run_all(loop):
    while True:
        try:
            t = loop.queue.next_signalled_unblocked()
            t.execute_until_yield_or_return()
            if t.complete:
                loop.queue.remove(t)
        except Empty:
            return

def run_until_complete(loop, task):
    loop.queue.push(task)
    run_all(loop)  # drains the whole queue before returning
    return task.result

How it could be better:

def run_until_complete(loop, task):
    loop.queue.push(task)
    while not task.complete:
        try:
            t = loop.queue.next_signalled_unblocked()
            t.execute_until_yield_or_return()
            if t.complete:
                loop.queue.remove(t)
            # keep running other tasks, but only until *this* task is done
        except Empty:
            return
    return task.result

KeyError at del curr_tasks[loop] in leave_task

During high-frequency calls in an implementation awaiting websockets.recv(), I occasionally receive:

Exception in callback Task.__wakeup(<Future finished result=None>)
handle: <Handle Task.__wakeup(<Future finished result=None>)>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/dist-packages/nest_asyncio.py", line 150, in run
    ctx.run(self._callback, *self._args)
  File "/usr/lib/python3.7/asyncio/tasks.py", line 303, in __wakeup
    self.__step()
  File "/usr/local/lib/python3.7/dist-packages/nest_asyncio.py", line 108, in step
    step_orig(task, exc)
  File "/usr/lib/python3.7/asyncio/tasks.py", line 287, in __step
    _leave_task(self._loop, self)
  File "/usr/local/lib/python3.7/dist-packages/nest_asyncio.py", line 122, in leave_task
    del curr_tasks[loop]
KeyError: <_UnixSelectorEventLoop running=False closed=False debug=False>

Infinite recursion when cancelling a task

When cancelling a task, something about nest_asyncio is causing an infinite recursion:

      Traceback:
      File "/usr/local/lib/python3.8/dist-packages/nest_asyncio.py", line 190, in run
        ctx.run(self._callback, *self._args)
      File "/usr/local/lib/python3.8/dist-packages/nest_asyncio.py", line 148, in step
        step_orig(task, exc)
      File "/usr/lib/python3.8/asyncio/tasks.py", line 319, in __step
        if self._fut_waiter.cancel():
      File "/usr/lib/python3.8/asyncio/tasks.py", line 254, in cancel
        if self._fut_waiter.cancel():
      File "/usr/lib/python3.8/asyncio/tasks.py", line 707, in cancel
        if child.cancel():
      File "/usr/lib/python3.8/asyncio/tasks.py", line 254, in cancel
        if self._fut_waiter.cancel():
      File "/usr/lib/python3.8/asyncio/tasks.py", line 254, in cancel
        if self._fut_waiter.cancel():
      File "/usr/lib/python3.8/asyncio/tasks.py", line 707, in cancel
        if child.cancel():
      ...
      File "/usr/lib/python3.8/asyncio/tasks.py", line 254, in cancel
        if self._fut_waiter.cancel():
      File "/usr/lib/python3.8/asyncio/tasks.py", line 250, in cancel
        self._log_traceback = False
      File "/usr/lib/python3.8/asyncio/futures.py", line 112, in _log_traceback
        if bool(val):
    <class 'RecursionError'>
    maximum recursion depth exceeded while calling a Python object

Any idea what could be causing this?

RuntimeError: Cannot enter into task

Hi,

thanks for this amazing package; it was the reason I adopted asyncio (otherwise the maintenance burden was too high).

I sometimes see this error in CI systems, and have trouble reproducing it locally:

Traceback (most recent call last):
  File "/Users/maartenbreddels/miniconda3/envs/dev/lib/python3.7/site-packages/nest_asyncio.py", line 149, in run
    ctx.run(self._callback, *self._args)
RuntimeError: Cannot enter into task <Task pending coro=<_debounced_callable.__call__.<locals>.debounced_execute.<locals>.run_async() running at /Users/maartenbreddels/src/vaex/packages/vaex-core/vaex/jupyter/utils.py:149>> while another task <Task pending coro=<InteractiveShell.run_cell_async() running at /Users/maartenbreddels/miniconda3/envs/dev/lib/python3.7/site-packages/IPython/core/interactiveshell.py:3020> cb=[IPythonKernel._cancel_on_sigint.<locals>.cancel_unless_done(<Future pendi...ernel.py:230]>)() at /Users/maartenbreddels/miniconda3/envs/dev/lib/python3.7/site-packages/ipykernel/ipkernel.py:230, IOLoop.add_future.<locals>.<lambda>() at /Users/maartenbreddels/miniconda3/envs/dev/lib/python3.7/site-packages/tornado/ioloop.py:690]> is being executed.

I wonder if you have any idea what can cause this so I can try to reproduce it and open a proper issue (or fix it).

cheers,

Maarten

Python 3.10: test_two_run_until_completes_in_one_outer_loop test fails

======================================================================                
ERROR: test_two_run_until_completes_in_one_outer_loop (tests.nest_test.NestTest)
----------------------------------------------------------------------                
Traceback (most recent call last):                                                                                                                                           
  File "/tmp/portage/dev-python/nest_asyncio-1.5.1/work/nest_asyncio-1.5.1/tests/nest_test.py", line 87, in test_two_run_until_completes_in_one_outer_loop                   
    asyncio.gather(f1(), f2(), loop=self.loop))                                       
TypeError: gather() got an unexpected keyword argument 'loop'       

The loop parameter has been removed in py3.10.
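
One way the test could be adapted (a sketch; assuming it only needs both coroutines gathered on the test's loop): call gather() from inside a coroutine so it picks up the running loop implicitly.

async def gathered():
    # gather() no longer accepts loop= since Python 3.10;
    # called from a coroutine, it uses the running loop.
    return await asyncio.gather(f1(), f2())

self.loop.run_until_complete(gathered())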
