
multiprocessing-logging's Introduction

multiprocessing-logging


When using the multiprocessing module, logging becomes less useful: sub-processes have to log to individual files or streams, or there is a risk of records becoming garbled.

This simple module implements a Handler which, when set on the root Logger, tunnels records to the main process so that they are handled correctly.

It is currently tested on Linux with Python 2.7 and 3.6+.

PyPy3 hangs on the tests, so I don't recommend using it.

Recently, PyPy appears to be working.

It only works on POSIX systems, and only Linux is supported. It does not work on Windows.

Origin

This library was taken verbatim from a StackOverflow post and extracted into a module so that I wouldn't have to copy the code into every project.

Since then, several improvements have been contributed.

Usage

Before you start logging but after you configure the logging framework (maybe with logging.basicConfig(...)), do the following:

import multiprocessing_logging

multiprocessing_logging.install_mp_handler()

and that's it.

With multiprocessing.Pool

When using a Pool, make sure install_mp_handler is called before the Pool is instantiated, for example:

import logging
from multiprocessing import Pool
from multiprocessing_logging import install_mp_handler

logging.basicConfig(...)
install_mp_handler()
pool = Pool(...)
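
For reference, here is a complete runnable sketch of this pattern with the ... placeholders filled in; the file name, log level and worker function are illustrative assumptions, not part of the library:

import logging
from multiprocessing import Pool

from multiprocessing_logging import install_mp_handler


def worker(n):
    # Records logged in the child are tunneled back to the parent's handlers.
    logging.getLogger(__name__).info("processing item %d", n)
    return n * n


if __name__ == "__main__":
    # Configure logging first, then wrap the existing handlers,
    # then create the Pool so the children inherit the wrapped handlers.
    logging.basicConfig(filename="example.log", level=logging.INFO)
    install_mp_handler()

    with Pool(4) as pool:
        results = pool.map(worker, range(10))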

Problems

The approach of this module relies on fork being used to create new processes. This start method is basically unsafe when also using threads, as this module does.

The consequence is that there's a low probability of the application hanging when creating new processes.

As a palliative, don't continuously create new processes. Instead, create a Pool once and reuse it.
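
A minimal sketch of that palliative (the batch structure and names are illustrative): create the Pool once, right after install_mp_handler(), and submit repeated batches of work to it instead of constructing a new Pool each time:

import logging
from multiprocessing import Pool

from multiprocessing_logging import install_mp_handler


def handle(item):
    logging.getLogger(__name__).info("handled %r", item)


if __name__ == "__main__":
    logging.basicConfig(level=logging.INFO)
    install_mp_handler()

    # One long-lived Pool, created once after the handler is installed...
    pool = Pool(4)
    try:
        # ...and reused for every batch, instead of a new Pool per batch.
        for batch in (range(10), range(10, 20), range(20, 30)):
            pool.map(handle, batch)
    finally:
        pool.close()
        pool.join()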

multiprocessing-logging's People

Contributors

bloomen, caunion, cjw296, deajan, feldsam, johnthagen, jruere, larinam


multiprocessing-logging's Issues

exc_info missing on record

Hi @jruere,

I notice the library stringifies the exception info into exc_text, then clears the exc_info property.

This causes issues because the handler I'm using (AzureLogHandler) requires the exc_info property to determine that the record was an exception. I understand why the exc_info property is cleared, but would it be possible to parse the exc_text back into exc_info before submitting the record to the sub_handler?

Thanks

Lost LogRecords

Hi,

I've come across a situation where logs are randomly lost and I cannot figure out why. I'm not sure whether this is timing related (I added a sleep to see if it makes a difference... it doesn't).

In my project I normally do logger = logging.getLogger("someName"), which I then use throughout the file. While playing with a small example I noticed that logs go missing quite often. Sometimes just the first message prints, sometimes more, sometimes all of them. If I comment out the install_mp_handler() call, it seems to work properly every time. My project uses multiprocessing, but this example is just a single file for testing.

My setup:
python3.6.2
MacOS Mojave 10.14

Here's the code and the output from a few consecutive runs:

import logging, time, multiprocessing_logging

logger = logging.getLogger("mymodule")

def logSomething():
    logger.debug("Debug message")
    logger.info("Info message")
    logger.warning("Warning message")
    logger.error("Error message")

if __name__=="__main__":
    logging.basicConfig(level=logging.DEBUG)
    multiprocessing_logging.install_mp_handler()
    time.sleep(2)  # to give the threads time to start properly
    logSomething()

Output:

~/dev/test > python3 testLogging.py 
DEBUG:mymodule:Debug message
INFO:mymodule:Info message
WARNING:mymodule:Warning message

~/dev/test > python3 testLogging.py 
DEBUG:mymodule:Debug message
INFO:mymodule:Info message

~/dev/test > python3 testLogging.py 
DEBUG:mymodule:Debug message
INFO:mymodule:Info message

~/dev/test > python3 testLogging.py 
DEBUG:mymodule:Debug message
INFO:mymodule:Info message

~/dev/test > python3 testLogging.py 
DEBUG:mymodule:Debug message
INFO:mymodule:Info message
WARNING:mymodule:Warning message
ERROR:mymodule:Error message

install_mp_handler() leaking semaphores?

In production we execute various periodic background jobs using the AP Scheduler package. The app runs in a docker container. We add periodic jobs to the scheduler at application startup by calling the scheduler's add_job() method:

self.scheduler.add_job(
    _execute,
    args=job_args,
    trigger="interval",
    max_instances=max_instances,
    # trigger arguments below
    seconds=interval.total_seconds(),
    start_date=start_time,
    name=name,
)

The _execute() function that we submit is our own wrapper around the actual callable that contains the job's core logic. Before invoking the latter, we initialize multiprocessing logging:

try:
    # The following literally just calls multiprocessing_logging.install_mp_handler(), and that's it
    context.setup_multiprocess_logging()
except OSError as exc:
    logger.exception(exc)

    curr_proc = psutil.Process(os.getpid())
    open_files = curr_proc.num_fds()
    max_open_files = curr_proc.rlimit(psutil.RLIMIT_NOFILE)
    logger.error(
        (
            "Setting up multiprocess logging failed: %s"
            "Process using %d file handle(s), RLIMIT_NOFILE: %s"
        ),
        exc, open_files, max_open_files
    )
    raise

However, every now and then we observe that setting up multiprocessing logging fails. It's unpredictable when, but it eventually always occurs, and we need to restart the app:

Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/apscheduler/executors/base.py", line 125, in run_job
    retval = job.func(*job.args, **job.kwargs)
  File "/usr/src/oracle/dex_ohlcv/scheduler/scheduler.py", line 189, in _execute
    context.setup_multiprocess_logging()
  File "/usr/src/oracle/dex_ohlcv/processcontext.py", line 314, in setup_multiprocess_logging
    install_mp_handler()
  File "/usr/local/lib/python3.10/site-packages/multiprocessing_logging.py", line 27, in install_mp_handler
    handler = MultiProcessingHandler("mp-handler-{0}".format(i), sub_handler=orig_handler)
  File "/usr/local/lib/python3.10/site-packages/multiprocessing_logging.py", line 60, in __init__
    self.queue = multiprocessing.Queue(-1)
  File "/usr/local/lib/python3.10/multiprocessing/context.py", line 103, in Queue
    return Queue(maxsize, ctx=self.get_context())
  File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 48, in __init__
    self._wlock = ctx.Lock()
  File "/usr/local/lib/python3.10/multiprocessing/context.py", line 68, in Lock
    return Lock(ctx=self.get_context())
  File "/usr/local/lib/python3.10/multiprocessing/synchronize.py", line 162, in __init__
    SemLock.__init__(self, SEMAPHORE, 1, 1, ctx=ctx)
  File "/usr/local/lib/python3.10/multiprocessing/synchronize.py", line 57, in __init__
    sl = self._semlock = _multiprocessing.SemLock(
OSError: [Errno 28] No space left on device

There is plenty of free storage space on the server, so "No space left on device" most likely means that we ran out of semaphore locks; we suspect a resource leak somewhere.

In the exception handler we also log the number of open file descriptors, and while the number is high, it's still way below the system limit:

Setting up multiprocess logging failed: [Errno 28] No space left on deviceProcess using 1265 file handle(s), RLIMIT_NOFILE: (1048576, 1048576)

Questions:

  • Do we use install_mp_handler() correctly, i.e. every time the AP Scheduler starts executing a job in a new child process?
  • Are you aware of any potential semaphore leaks caused by using install_mp_handler() this way? Are we even exploring in the right direction?
  • Does logging the number of currently open file descriptors even make sense? Do those include any semaphore locks?

TL;DR - the crashes always happen in the same place, i.e. when calling install_mp_handler(). We suspect a resource leak, but are not sure whether it's caused by install_mp_handler() internals or by the way we use it.

We appreciate any tips or insights, and we can of course provide any additional information needed.
If this is definitely not an issue with install_mp_handler(), that's also OK, we can at least focus our search elsewhere.

Hangs when using even simple example

After having issues in a fairly complex program, I was able to cut it down to just this code:

#!/usr/bin/env python3

from multiprocessing import Pool
import logging
import sys
import time

from multiprocessing_logging import install_mp_handler

logging.basicConfig(stream=sys.stdout, level=logging.DEBUG)
install_mp_handler()

logger = logging.getLogger()

def func():
    time.sleep(0.5)

if __name__ == '__main__':
    while True:
        logger.info('a message')

        pool = Pool(2)

        for i in range(2):
            pool.apply_async(func)

        pool.close()
        pool.join()

The issue happens:

  • On python 3.5, 3.7, and 3.9 (Amazon Linux and ArchLinux)
  • Regardless of whether using multiprocessing.Pool or concurrent.futures.ProcessPoolExecutor.

Am I missing something really obvious here?

Log entries only at the end

My log entries are only written to the text file when my program is finished. Am I doing something wrong, or is that normal?

multiprocess_logger.py format error

Log statement:

num_alerts = len(self.__alerts)
logger.info(
    "Writing {} messages to alerts topic".format(num_alerts))

Line:

record.msg = record.msg % record.args

Exception:
Traceback (most recent call last):
  File "/opt/company/jarvis/logger_config/multiprocess_logger.py", line 86, in emit
    s = self._format_record(record)
  File "/opt/company/jarvis/logger_config/multiprocess_logger.py", line 43, in _format_record
    record.msg = record.msg % record.args
TypeError: not all arguments converted during string formatting

Add support for `spawn` and `forkserver` start methods

Thanks for the library! As of Python 3.8, the default start method for multiprocessing processes on macOS is spawn; see here.

Unfortunately, as noted in #40 and #28, the current implementation of multiprocessing-logging does not support the spawn start method. It seems forkserver has the same type of issue.

The suggestion is to add support for the spawn and forkserver start methods, or to update the documentation to note that the fork start method should be used.


Example to reproduce:

import logging
import multiprocessing
import time

from multiprocessing_logging import install_mp_handler

logging.basicConfig(
    filename="output.txt",
    format="%(asctime)s: %(message)s",
    filemode="w",
    level=logging.INFO,
)
install_mp_handler()


def worker(x):
    logging.info(f"sleeping {x} seconds")
    time.sleep(x)


if __name__ == "__main__":
    logging.info("start")

    # https://docs.python.org/3/library/multiprocessing.html#multiprocessing.get_context
    ctx = multiprocessing.get_context("spawn")
    with ctx.Pool(4) as p:
        p.map(worker, [1, 2, 3, 4])

    logging.info("stop")

spawn or forkserver start methods result in the following output:

2020-10-14 23:05:36,979: sleepi2020-10-14 23:05:41,030: stop

fork start method results in the following output:

2020-10-14 23:09:59,094: start
2020-10-14 23:09:59,103: sleeping 1 seconds
2020-10-14 23:09:59,103: sleeping 2 seconds
2020-10-14 23:09:59,104: sleeping 3 seconds
2020-10-14 23:09:59,105: sleeping 4 seconds
2020-10-14 23:10:03,169: stop

self.queue.qsize() growing indefinitely

I'm not sure what is causing the problem, but my sub_handler.emit() randomly stops being called.

Also, I found that if I add print(self.queue.qsize()) in the emit() of MultiProcessingHandler, I see qsize() start growing indefinitely right after my sub_handler's emit() is called for the last time.

Any idea what might be happening? I'd be happy to provide more detail if it's deemed helpful.

def emit(self, record):
    try:
        s = self._format_record(record)
        print(self.queue.qsize())  # grows indefinitely after sub_handler.emit() stops getting called
        self._send(s)
    except (KeyboardInterrupt, SystemExit):
        raise
    except:
        self.handleError(record)

Why using a Queue rather than a simple Lock?

Hi. :)

I have a quick question regarding the implementation of MultiProcessingHandler: why did you choose to rely on multiprocessing.Queue() rather than using a multiprocessing.Lock()?

This is basically a duplicate of this StackOverflow question: Multiprocessing logging: Lock vs Queue

There are many discussions about multiprocessing and logging in Python; most of them suggest using multiprocessing.Queue() alongside a thread for the receiving loop. This looks like a complicated implementation compared to simply wrapping the emit() calls with a lock.
I see that there are advantages, as it allows a non-blocking handler. But this is about "multiprocessing logging", not "non-blocking logging", right? Other handlers are blocking by default anyway.

I wanted to know your opinion: what is the rationale behind this choice?
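
For reference, here is a minimal sketch of the lock-based alternative the question describes; LockingHandler is a hypothetical wrapper, not part of this library, and it only works when the lock and the wrapped handler are inherited by child processes via fork:

import logging
import multiprocessing


class LockingHandler(logging.Handler):
    """Hypothetical wrapper that serializes emit() across forked processes."""

    def __init__(self, sub_handler):
        super().__init__()
        self.sub_handler = sub_handler
        # Shared with child processes created via fork.
        self.mp_lock = multiprocessing.Lock()

    def emit(self, record):
        # Every process blocks here until it owns the cross-process lock.
        with self.mp_lock:
            self.sub_handler.emit(record)

With this approach every emit() blocks for the duration of the underlying write, which is the blocking behaviour the queue-plus-receiver-thread design avoids.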

conda-forge package

Hi @jruere

Was wondering if you would consider creating a conda-forge package for multiprocessing-logging. Happy to collaborate on it if you'd like.

shutdown delay with 0.2.5

I think there may be a regression in 0.2.5, which seems to wait for 5 seconds on every shutdown for me.

This code should probably not execute on the receiving side based on the documentation; indeed the docs suggest it's not needed on the sending side either, since it's the default behaviour.

For reference, here's how logbook does it:
https://github.com/getlogbook/logbook/blob/9d81786f505a5edabc14cfb2b7af6567c219e38e/logbook/queues.py#L493
...but there's no decent example to show how to wire those two together!

My guess is that there's some thread management issue lurking in multiprocessing-logging, or a process was being managed badly. Thoughts?

OSError: handle is closed

Looks like multiprocessing-logging has some issues when a child process crashes. In my case it is the child process that crashes:

 Traceback (most recent call last):
   File "/usr/local/lib/python3.10/site-packages/multiprocessing_logging.py", line 81, in _receive
     record = self.queue.get(timeout=0.2)
   File "/usr/local/lib/python3.10/multiprocessing/queues.py", line 113, in get
     if not self._poll(timeout):
   File "/usr/local/lib/python3.10/multiprocessing/connection.py", line 136, in _check_closed
     raise OSError("handle is closed")
 OSError: handle is closed

 (The same traceback is printed repeatedly, interleaved across the receiver threads.)

I'm not sure yet what the right solution is here, but ironically this error message is polluting the logs.

Question on integration with libraries

Good morning,
I have a question about how to integrate the package into a Python package I'm currently working on.
The package is structured as follows:

./bin/main.py
./src/lib/programs.py
./src/lib/workers.py

The entry point to the software is main.py, which also creates the logger. Then, depending on the configuration, it passes the arguments to programs.py, which runs the specific subprocesses of interest, described in workers.py.

In workers.py, I have a situation as follows:

import logging
import subprocess as sp
import multiprocessing as mp
from multiprocessing import Lock, Value  # used by the Counter class below
from multiprocessing_logging import install_mp_handler
logger = logging.getLogger(__name__)

# Initialiser for multiprocessing data
def init(args):
    ''' store the counter for later use '''
    global counter
    counter = args
    # Define parallel logging functions
    install_mp_handler()

# Counter object
class Counter(object):
    def __init__(self, initval=0):
        self.val = Value('i', initval)
        self.lock = Lock()
    def increment(self):
        with self.lock:
            self.val.value += 1            
    @property
    def value(self):
        return self.val.value

# Worker function.
def work( in_cmd ):
    global counter
    sp.call(in_cmd, shell=True)
    # += operation is not atomic, so we need to get a lock:
    counter.increment()
    if counter.value == 1:
        logger.info("Done {} job".format( counter.value ))
    else:
        logger.info("Done {} jobs".format( counter.value ))
    return 0


def runParallel(commands, threads):
    ncmds = len(commands)
    if threads > 1:
        # Define parallel counter
        counter = Counter(0)

        logger.info("Start jobs...")

        # Define parallel logging functions
        pool = mp.Pool(initializer = init, initargs = (counter, ), processes=threads)
        #Run the jobs
        pool.map(work, commands)
    else:
        for n, in_cmd in enumerate(commands):
            work( in_cmd )
            if n%10 == 0 and n>0: logger.info("Done {}".format(n))
    logging.info("Completed {} jobs.".format(ncmds))
    return None

Naively, I tried to add install_mp_handler() in the init function used as the pool initializer. It didn't work, so I placed it as follows:

        install_mp_handler()
        # Define parallel logging functions
        pool = mp.Pool(initializer = init, initargs = (counter, ), processes=threads)
        # Run the jobs
        pool.map(work, commands)
        logger.info("Start jobs...")

But it didn't work either. So I placed it at the beginning of the software, in the main.py code, as below:

    import logging
    from multiprocessing_logging import install_mp_handler

    # Get arguments
    my_args = parser()

    # Define log format 
    LOGFORMAT = '%(asctime)s : %(levelname)s : %(message)s'
    logging.basicConfig(format=LOGFORMAT, level=my_args.verbose, datefmt="%Y-%m-%d %H:%M:%S")
    install_mp_handler()

But it still didn't work. I'm not sure what I am doing wrong; could you please give me some suggestions?
Thank you in advance,
Andrea

Does not work at all for subprocesses

I'm trying to use this multiprocessing-logging in Python 2.7.14 (32-bit) on Windows 10 (64-bit). It doesn't seem to work at all for the subprocesses. Looking at your code, I don't see how it could work, not just on Windows but anywhere!

If you call multiprocessing_logging.install_mp_handler() inside if __name__ == '__main__': then only the first main process will know about the re-mapped logging, and only it will run the Queue receiver thread. If you call multiprocessing_logging.install_mp_handler() outside of if __name__ == '__main__': then, from my reading of your code, every sub-process will know about the re-mapped logging, but every sub-process will also have a receiver thread running. This seems fundamentally incorrect and I cannot understand how it could work anywhere. Since you claim it works on Linux and Windows, there's obviously something that I'm not understanding about your implementation and how to use it.

I've attached my very simple test program as mp_exp.py.txt (since GitHub doesn't support the .py file type) and the resulting log file from running this program with an arg of 10.

mp_exp.py.txt
mp_exp.log

If I've made some stupid mistake in how I'm using this, please help me fix it.

Please add an example program to the distribution

It would be helpful to have a test program.

  1. with pool
  2. without pool
    a. Subprocess in Daemon mode
    b. Subprocess not in Daemon mode.

If I can figure out how to make it work, I can help with this.

In my standard programs, I usually configure logging this way:

    
    if DEBUG == None:
        DEBUG=getattr(configuration,'debug')
    
    if VERBOSE == None:
        VERBOSE = True
         
    if DRYRUN == None:
        DRYRUN = getattr(configuration,'dryrun')
        
    if LOGLEVEL == None:
        LOGLEVEL = getattr(configuration,'log_level')
        
    if getattr(configuration,'log_file'):
        LOGFILE = getattr(configuration,'log_file')
        
    if LOG_FACILITY == None:
        LOG_FACILITY = getattr(configuration,'log_facility')
        
    if getattr(configuration,'log') != None:
        import logging
        import logging.config
        import logging.handlers
        
        # Set Formatting
        LOG_FORMAT = '%(asctime)s:%(name)s:%(funcName)s:%(levelname)s:%(message)s'
        LOG_DATE = '%m/%d/%Y %I:%M:%S %p'
        LOG_STYLE = style='%'
        LEVEL = getattr(configuration,'loglevel')
        
        
        if not 'Linux' in os.uname()[0]:
            LOG_SOCKET = '/var/run/syslog'
        else:
            LOG_SOCKET = '/dev/log'

        
        # create logger
        # set name as desired if not in a module
        logger = logging.getLogger(__log_name__ + ":" + __name__)
        logger.setLevel(LEVEL)
        
        # create handlers for Console and set level
        CONSOLE = logging.StreamHandler()
        CONSOLE.setLevel(logging.DEBUG)
        
        #create handlers for Syslog and set level
        SYSLOG = logging.handlers.SysLogHandler(address=LOG_SOCKET, facility=LOG_FACILITY)
        SYSLOG.setLevel(logging.INFO)

        #create handler for FILENAME and set level
        LOG_FILE = logging.FileHandler(LOGFILE,mode='a', encoding=None, delay=False)
        LOG_FILE.setLevel(logging.INFO)
        # create formatter
        formatter = logging.Formatter(LOG_FORMAT)

        # add formatter(s) to handlers
        CONSOLE.setFormatter(formatter)
        SYSLOG.setFormatter(formatter)
        LOG_FILE.setFormatter(formatter)
        
        # add handlers to logger
        if getattr(configuration,'log') == 'console':
            logger.addHandler(CONSOLE)
            
        if getattr(configuration,'log') == 'syslog':
            logger.addHandler(SYSLOG)    
            
        if getattr(configuration,'log') == 'file':
            logger.addHandler(LOG_FILE)
            
        logger.info('{0} started  {1} Logging Enabled'.format(__prog_name__,getattr(configuration,'log')))
        logger.debug('CLI Parameters {0}'.format(configuration))   

Deadlock / hangup under Python 3.9

I have recently switched to Python 3.9, and code that previously worked with multiprocessing_logging suddenly started hanging.
Given the code

with Pool(processes=8) as pool:
    for res in pool.imap_unordered(fn, inputs):
        do_something(res)
    pool.close()
    pool.join()

, I get the following two errors. Sometimes it hangs in join():

 File "/home/user/miniconda3/envs/wikifier/lib/python3.9/multiprocessing/pool.py", line 666, in join
   p.join()                                                                     
 File "/home/user/miniconda3/envs/wikifier/lib/python3.9/multiprocessing/process.py", line 149, in join
   res = self._popen.wait(timeout)                                              
 File "/home/user/miniconda3/envs/wikifier/lib/python3.9/multiprocessing/popen_fork.py", line 43, in wait
   return self.poll(os.WNOHANG if timeout == 0.0 else 0)                        
 File "/home/user/miniconda3/envs/wikifier/lib/python3.9/multiprocessing/popen_fork.py", line 27, in poll
   pid, sts = os.waitpid(self.pid, flag)

, sometimes in imap:

File "/home/user/miniconda3/envs/wikifier/lib/python3.9/site-packages/tqdm/std.py", line 1185, in __iter__
  for obj in iterable:                                                         
File "/home/user/miniconda3/envs/wikifier/lib/python3.9/multiprocessing/pool.py", line 858, in next
  self._cond.wait(timeout)                                                     
File "/home/user/miniconda3/envs/wikifier/lib/python3.9/threading.py", line 312, in wait
  waiter.acquire()                                                             

There are also times when it finishes without a problem, but not every time. As I remember, I never had this issue on Python 3.8, but it is entirely possible that the underlying operating system version is to blame.

On the other hand, logging from pool processes now works without install_mp_handler, which it didn't before, so maybe multiprocessing itself now supports this functionality?

OS: Linux 5.4.0-77-generic #86-Ubuntu SMP x86_64 GNU/Linux (in an LXC container)
Python: 3.9.5 (via conda)

Error on 21.04 LTS

Switched to 21.04 LTS with python 3.10.4

Get this:

May 22 07:04:05 tychocam authbind[7536]: --- Logging error ---
May 22 07:04:05 tychocam authbind[7536]: Traceback (most recent call last):
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/logging/handlers.py", line 73, in emit
May 22 07:04:05 tychocam authbind[7536]: if self.shouldRollover(record):
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/logging/handlers.py", line 197, in shouldRollover
May 22 07:04:05 tychocam authbind[7536]: self.stream.seek(0, 2) #due to non-posix-compliant Windows feature
May 22 07:04:05 tychocam authbind[7536]: ValueError: I/O operation on closed file.
May 22 07:04:05 tychocam authbind[7536]: Call stack:
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/threading.py", line 966, in _bootstrap
May 22 07:04:05 tychocam authbind[7536]: self._bootstrap_inner()
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/threading.py", line 1009, in _bootstrap_inner
May 22 07:04:05 tychocam authbind[7536]: self.run()
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/threading.py", line 946, in run
May 22 07:04:05 tychocam authbind[7536]: self._target(*self._args, **self._kwargs)
May 22 07:04:05 tychocam authbind[7536]: File "/var/www/FlaskApp/main_program/mainApp/cloud_processing/app_cloud.py", line 127, in start_cloud_app
May 22 07:04:05 tychocam authbind[7536]: with get_context("fork").Pool(3) as pl:
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/context.py", line 119, in Pool
May 22 07:04:05 tychocam authbind[7536]: return Pool(processes, initializer, initargs, maxtasksperchild,
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/pool.py", line 212, in init
May 22 07:04:05 tychocam authbind[7536]: self._repopulate_pool()
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/pool.py", line 303, in _repopulate_pool
May 22 07:04:05 tychocam authbind[7536]: return self._repopulate_pool_static(self._ctx, self.Process,
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/pool.py", line 326, in _repopulate_pool_static
May 22 07:04:05 tychocam authbind[7536]: w.start()
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/process.py", line 121, in start
May 22 07:04:05 tychocam authbind[7536]: self._popen = self._Popen(self)
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/context.py", line 277, in _Popen
May 22 07:04:05 tychocam authbind[7536]: return Popen(process_obj)
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 19, in init
May 22 07:04:05 tychocam authbind[7536]: self._launch(process_obj)
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/popen_fork.py", line 71, in _launch
May 22 07:04:05 tychocam authbind[7536]: code = process_obj._bootstrap(parent_sentinel=child_r)
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/process.py", line 315, in _bootstrap
May 22 07:04:05 tychocam authbind[7536]: self.run()
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
May 22 07:04:05 tychocam authbind[7536]: self._target(*self._args, **self._kwargs)
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/pool.py", line 125, in worker
May 22 07:04:05 tychocam authbind[7536]: result = (True, func(*args, **kwds))
May 22 07:04:05 tychocam authbind[7536]: File "/usr/lib/python3.10/multiprocessing/pool.py", line 48, in mapstar
May 22 07:04:05 tychocam authbind[7536]: return list(map(*args))
May 22 07:04:05 tychocam authbind[7536]: File "/var/www/FlaskApp/main_program/mainApp/cloud_processing/app_cloud.py", line 396, in process_image
May 22 07:04:05 tychocam authbind[7536]: logger.debug('saving plot to cloud_stack: ' + filename)
May 22 07:04:05 tychocam authbind[7536]: Message: 'saving plot to cloud_stack: 1653221036'

Not working in macOSX

The tests cannot pass on macOS 11.2.2.

Reported error: AttributeError: Can't pickle local object 'WhenMultipleProcessesLogRecords.test_then_records_should_not_be_garbled..worker'

A logging decorator that adds worker name in log messages

Hello,

Thanks for your module, it saved me some headaches.

I wrote a decorator for the multiprocessing_logging package that adds the current process name to log messages, so it becomes clear who logs what.
It also runs install_mp_handler(), so it's no longer necessary to run it before creating a pool.

This allows me to see which worker creates which logs.
Tested under Windows 10 x64.

If you find it worthy, I can do a PR.
Best regards.

	import sys
	import logging
	from functools import wraps
	import multiprocessing
	import multiprocessing_logging

	# Setup basic console logger as 'logger'
	logger = logging.getLogger()
	console_handler = logging.StreamHandler(sys.stdout)
	console_handler.setFormatter(logging.Formatter(u'%(asctime)s :: %(levelname)s :: %(message)s'))
	logger.setLevel(logging.DEBUG)
	logger.addHandler(console_handler)


	# Create a decorator for functions that are called via multiprocessing pools
	def logs_mp_process_names(fn):
		class MultiProcessLogFilter(logging.Filter):
			def filter(self, record):
				try:
					process_name = multiprocessing.current_process().name
				except BaseException:
					process_name = __name__
				record.msg = f'{process_name} :: {record.msg}'
				return True

		multiprocessing_logging.install_mp_handler()
		f = MultiProcessLogFilter()

		# Wraps is needed here so apply / apply_async know the function name
		@wraps(fn)
		def wrapper(*args, **kwargs):
			logger.removeFilter(f)
			logger.addFilter(f)
			return fn(*args, **kwargs)

		return wrapper


	# Create a test function and decorate it
	@logs_mp_process_names
	def test(argument):
		logger.info(f'test function called via: {argument}')


	# You can also wrap previously undecorated functions
	def undecorated_function():
		logger.info('I am not decorated')


	@logs_mp_process_names
	def redecorated(*args, **kwargs):
		return undecorated_function(*args, **kwargs)


	# Enjoy
	if __name__ == '__main__':
		with multiprocessing.Pool() as mp_pool:
			# Also works with apply_async
			mp_pool.apply(test, ('mp pool',))
			mp_pool.apply(redecorated)
			logger.info('some main logs')
			test('main program')

feature request: uninstall_mp_handler()

It's common to have a parent process initiate multiprocessing and then still perform logging afterwards. I'd like to request an uninstall_mp_handler() function that would be the mirror of install_mp_handler(), so the parent process could continue after the completion of multiprocessing jobs without the overhead of a synchronized queue.
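
For what it's worth, a rough sketch of what such a function might look like; it assumes the wrapped handler is exposed as the sub_handler attribute (as referenced elsewhere in these issues), which is not a documented API:

import logging

from multiprocessing_logging import MultiProcessingHandler


def uninstall_mp_handler(logger=None):
    """Hypothetical mirror of install_mp_handler(): swap each
    MultiProcessingHandler back for the handler it wraps."""
    if logger is None:
        logger = logging.getLogger()
    for handler in list(logger.handlers):
        if isinstance(handler, MultiProcessingHandler):
            logger.removeHandler(handler)
            # Assumption: the wrapped handler is kept in .sub_handler.
            logger.addHandler(handler.sub_handler)
            # Shutting down the wrapper's queue and receive thread is omitted
            # here, since that depends on library internals.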

function to automatically bootstrap

What about applying it to all existing configured handlers?

Perhaps something like this (untested)?

    import logging
    import multiprocessing_logging as mp_logging

    logger = logging.getLogger()  # e.g. the root logger
    for i, orig_handler in enumerate(list(logger.handlers)):
        handler = mp_logging.MultiProcessingHandler(
            'worker-logger-{}'.format(i), sub_handler=orig_handler)
        logger.removeHandler(orig_handler)
        logger.addHandler(handler)

multiprocessing.Queue may get full and raise errors

I have a simple program which spawns 2 processes, and they start logging.

Traceback:

Traceback (most recent call last):
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/site-packages/multiprocessing_logging.py", line 117, in emit
    self._send(s)
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/site-packages/multiprocessing_logging.py", line 98, in _send
    self.queue.put_nowait(s)
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/multiprocessing/queues.py", line 155, in put_nowait
    return self.put(obj, False)
  File "/Users/xiehaochen/opt/anaconda3/envs/py27test/lib/python2.7/multiprocessing/queues.py", line 102, in put
    raise Full
Full

(The same traceback is raised by both processes, interleaved in the output.)

Code:
import logging
import logging.handlers
import multiprocessing
import time
import os
import multiprocessing_logging


print 'current pid: {0}'.format(os.getppid())

log_dir = os.path.abspath(os.path.dirname(__file__) + '/logs')
for log_file in os.listdir(log_dir):
    os.remove(log_dir + '/' + log_file)

logger = logging.getLogger()
logfile = os.path.basename(__file__).split('.')[0] + '.log'
logger.setLevel(logging.INFO)
fh = logging.handlers.TimedRotatingFileHandler('{0}/{1}'.format(log_dir, logfile), when='S')
fh.setLevel(logging.DEBUG)
formatter = logging.Formatter('[%(asctime)s] %(levelname)s %(processName)s %(process)d %(thread)d %(threadName)s %(filename)s:%(lineno)d -> %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)

multiprocessing_logging.install_mp_handler()

def test_func():
    i = 0

    while True:
        logging.info(i)
        i += 1

def main_logic(num_process=1):
    processes = []

    for _ in range(num_process):
        processes.append(multiprocessing.Process(target=test_func))

    for process in processes:
        process.start()

    time.sleep(2)
    print('week up here')
    for process in processes:
        process.terminate()

def analyze_logs(num_process=1):
    pass

def main(num_process=1):
    main_logic(num_process)
    analyze_logs(num_process)


if __name__ == '__main__':
    main(num_process=2)

How to use this module

Hi, I try to use this module according to the readme, but the logs seem to end up garbled, just like when I don't use this module. What am I doing wrong?

import logging
from multiprocessing import Pool
from multiprocessing_logging import install_mp_handler
import functions
import globals as g

# init logging
logging.basicConfig(format='%(asctime)s - %(levelname)s - %(message)s', level=logging.INFO)
install_mp_handler()

# init multi processing pool
pool = Pool(g.args.parallel)

results = pool.map(functions.backup_image, [image for key, image in sorted(g.images.iteritems())])

pool.close()

and in functions.backup_image I use:

logging.info('Backup persistent image %d:%s attached to VM %d:%s as disk %d' % (
                image.ID, image.NAME, vmId, vm.NAME, vmDiskId))

Works for me on Windows

Thank you for this very accessible multiprocessing logging implementation! Really saved me a headache on a project where I needed to know what was going on while running a multiprocess map (via multiprocess.dummy.Pool).

This item in the README gave me some pause:

It does not work on Windows and I believe it could not work.

But I found it to be untrue (at least running 32-bit Python 2.7.13 on Windows 10). I was curious as to where the caveat came from because I didn't notice anything in the source that would be outright incompatible on Windows.

Logging using importlib doesn't work

I am currently using multiprocessing.Pool to read and prepare data for a machine learning task. I tried to install the logging hook right after my program starts: I first create a logger using the logging.basicConfig method and call install_mp_handler afterwards. After that I create multiple Pools and pass down the initializer argument.

I do, however, load files dynamically using importlib.load_module. Each of the loaded modules initializes a logger right at the start of the file:

# import statements
log = logging.getLogger(__name__)

# method that gets called after the module has been loaded
def called_from_imap(file):
   pass

def called_from_main():
    pass

I suspect that importing the module using importlib prevents this library from working properly. The strange part, though, is that each of the imported modules has two functions: the first gets called within the context of a `Pool.imap` call, the second gets called in the same thread as the main function, without creating a Pool. The second one does, however, log everything correctly.

Any idea what the problem might be?

Python: 3.6.8
OS: Windows

Full tracebacks written instead of single log

In Python 3.9.0 with multiprocessing-logging 0.3.3 we keep getting full tracebacks printed instead of single log lines.

        install_mp_handler()
        _calculate_several = partial(self._calculate_several_unpack)
        with mp.Pool(processes=len(inputs), initializer=install_mp_handler) as pool:
            results = pool.map(_calculate_several, inputs)
        pool.join()

It seems to print the full traceback for each individual message it tries to log from one of the installed packages:

--- Logging error ---
Traceback (most recent call last):
  File "/usr/local/lib/python3.9/site-packages/multiprocessing_logging.py", line 115, in emit
    self._send(s)
  File "/usr/local/lib/python3.9/site-packages/multiprocessing_logging.py", line 96, in _send
    self.queue.put_nowait(s)
  File "/usr/local/lib/python3.9/multiprocessing/queues.py", line 138, in put_nowait
    return self.put(obj, False)
  File "/usr/local/lib/python3.9/multiprocessing/queues.py", line 88, in put
    raise ValueError(f"Queue {self!r} is closed")
ValueError: Queue <multiprocessing.queues.Queue object at 0x7f29c9bcf130> is closed
Call stack:
  File "/opt/code/entrypoints/entrypoint_calculation.py", line 118, in <module>
    ....
  File "/usr/local/lib/python3.9/site-packages/train/cell/dot_product_rnn_cell.py", line 79, in __init__
    logger.info("Created Cell")
  File "/usr/local/lib/python3.9/logging/__init__.py", line 1446, in info
    self._log(INFO, msg, args, **kwargs)
  File "/usr/local/lib/python3.9/logging/__init__.py", line 1589, in _log
    self.handle(record)
  File "/usr/local/lib/python3.9/logging/__init__.py", line 1599, in handle
    self.callHandlers(record)
  File "/usr/local/lib/python3.9/logging/__init__.py", line 1661, in callHandlers
    hdlr.handle(record)
  File "/usr/local/lib/python3.9/logging/__init__.py", line 952, in handle
    self.emit(record)
  File "/usr/local/lib/python3.9/site-packages/multiprocessing_logging.py", line 119, in emit
    self.handleError(record)
Message: 'Created Cell'
Arguments: ()

Does not log multiprocesses

I call multiprocessing_logging.install_mp_handler(logger) after instantiating my logger, but nothing is logged when using the following test code:

def util_func(a, logger):
    print(f"Printing a to the log: {a}")    # prints to screen 
    logger.info(f"Printing a to the log: {a}")   # does NOT print to the log when called using p.map

# main code
p = Pool(2)
p.map(util_func, range(4))
p.close()
p.join()
p.clear()

The logger works when I simply call logger.info('test123'), but not when multiprocessing. Please advise what I'm doing wrong and how I can fix it.
Thank you!

Which process does one add the handler to?

I have a main process that launches a pool of processes, each of which does some logging. The main process does some logging initially, starts the pool, and just waits until all processes in the pool have finished.
I redirect stdout and stderr of the main process to a given file, and the pool's processes get the same stdout and stderr. The output gets jumbled.

The README says to add the handler.
Do I do that in each process of the pool, or in the main process, or in both?
How do I log to stdout/stderr ?

Can you release new version to PyPi?

Hello,

It seems you merged a fix for the BrokenPipeError exception missing in Python 2 in December 2020, but it was never released to PyPI - the last version there is 3.1, from April of that year.

Could you bump the version to 3.2 and release it to pypi?

Thanks!

How to use the same logger, passed to a different process, across all modules?

I create one logger with different handlers using the MultiProcessingHandler class and pass it to the different processes I create. My processes have a long workflow, and I call many functions from different modules in which I need to log. I currently pass the logger to each function or class I want to log from. I don't feel it is very convenient to keep passing the logger all around.

So how can I use the logger I passed to the new process wherever it is needed, without passing it around?

code :

from utils.multiprocessing_logging import MultiProcessingHandler

class InfoFilter(logging.Filter):
    def filter(self, rec):
        return rec.levelno == logging.INFO

# logger handler config
logger = logging.getLogger(__name__)
e_handler = logging.FileHandler(config.main_config['save_dir']+'log/'+config.main_config['error_log_file'])
e_handler.setLevel(logging.ERROR)
e_format = logging.Formatter('%(asctime)s - %(processName)s - %(levelname)s - %(message)s')
e_handler.setFormatter(e_format)

i_handler = logging.FileHandler(config.main_config['save_dir']+'log/'+config.main_config['info_log_file'])
i_handler.setLevel(logging.INFO)
i_format = logging.Formatter('%(asctime)s - %(processName)s - %(levelname)s - %(message)s')
i_handler.setFormatter(i_format)
i_handler.addFilter(InfoFilter())

# init logger handler
e_handler = MultiProcessingHandler(
    'mp-error-handler',e_handler)
i_handler = MultiProcessingHandler(
    'mp-info-handler',i_handler)
logger = logging.Logger('root')
logger.addHandler(e_handler)
logger.addHandler(i_handler)

process = mp.Process(target = start_tracking, args=(queue, logger,))
process.start()

pytest hangs on python 3.10

multiprocessing-logging 0.3.4
Python 3.10.6

Hi.
I've got an issue where the tests pass but hang before finishing.

Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/util.py", line 360, in _exit_function
    _run_finalizers()
  File "/usr/lib/python3.10/multiprocessing/util.py", line 300, in _run_finalizers
    finalizer()
  File "/usr/lib/python3.10/multiprocessing/util.py", line 224, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/usr/lib/python3.10/multiprocessing/queues.py", line 199, in _finalize_join
    thread.join()
  File "/usr/lib/python3.10/threading.py", line 1096, in join
    self._wait_for_tstate_lock()
  File "/usr/lib/python3.10/threading.py", line 1116, in _wait_for_tstate_lock
    if lock.acquire(block, timeout):
KeyboardInterrupt: 
Exception ignored in atexit callback: <bound method finalize._exitfunc of <class 'weakref.finalize'>>
Traceback (most recent call last):
  File "/usr/lib/python3.10/weakref.py", line 657, in _exitfunc
    pending = cls._select_for_exit()
  File "/usr/lib/python3.10/weakref.py", line 638, in _select_for_exit
    L = [(f,i) for (f,i) in cls._registry.items() if i.atexit]
  File "/usr/lib/python3.10/weakref.py", line 638, in <listcomp>
    L = [(f,i) for (f,i) in cls._registry.items() if i.atexit]
KeyboardInterrupt: 

There are also exceptions during the tests:

    File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
      self.run()
    File "/usr/lib/python3.10/threading.py", line 953, in run
      self._target(*self._args, **self._kwargs)
    File "/lib/python3.10/site-packages/multiprocessing_logging.py", line 77, in _receive
      record = self.queue.get(timeout=0.2)
    File "/usr/lib/python3.10/multiprocessing/queues.py", line 113, in get
      if not self._poll(timeout):
    File "/usr/lib/python3.10/multiprocessing/connection.py", line 257, in poll
      return self._poll(timeout)
    File "/usr/lib/python3.10/multiprocessing/connection.py", line 424, in _poll
      r = wait([self], timeout)
    File "/usr/lib/python3.10/multiprocessing/connection.py", line 931, in wait
      ready = selector.select(timeout)
    File "/usr/lib/python3.10/selectors.py", line 416, in select
      fd_event_list = self._selector.poll(timeout)
  OverflowError: timeout is too large
  
    warnings.warn(pytest.PytestUnhandledThreadExceptionWarning(msg))

handle is closed error in Python 3.9.13

When using multiprocessing-logging in tandem with some code that has a Pool, I'm seeing a new error when upgrading from Python 3.9.12 to 3.9.13. The error messages are like:

File "D:\a\cellfinder\cellfinder\.tox\py39-\lib\site-packages\multiprocessing_logging.py", line 81, in _receive
  record = self.queue.get(timeout=0.2)
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\queues.py", line 113, in get
  if not self._poll(timeout):
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\connection.py", line 260, in poll
  self._check_closed()
OSError: [WinError 6] The handle is invalid
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\connection.py", line 335, in _poll
  return bool(wait([self], timeout))
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\connection.py", line 888, in wait
  ov.cancel()
File "C:\hostedtoolcache\windows\Python\3.9.13\x64\lib\multiprocessing\connection.py", line 335, in _poll
  return bool(wait([self], timeout))
File "D:\a\cellfinder\cellfinder\.tox\py39-\lib\site-packages\multiprocessing_logging.py", line 81, in _receive
  record = self.queue.get(timeout=0.2)
OSError: [WinError 6] The handle is invalid
OSError: handle is closed

and they go on pretty much indefinitely because they're in an infinite while loop in multiprocessing_logging.py. I'm afraid I haven't had time to put together a reproducible example, but I can confirm this definitely isn't an issue on Python 3.9.12 and is an issue on 3.9.13. Looking through the Python changelog, my best guess is that it's python/cpython#31913 that caused this.

To fix this, perhaps the exception handling in the lines below needs to be updated to catch the OSError being thrown?

except (KeyboardInterrupt, SystemExit):
    raise
except (BrokenPipeError, EOFError):
    break
except queue.Empty:
    pass  # This periodically checks if the logger is closed.
except:
    traceback.print_exc(file=sys.stderr)

Does not work properly with RotatingFileHandler

Hi, I'm trying this tool with the multiprocessing package and a RotatingFileHandler, but some log messages are missing (about 2%):

import glob
import logging
import multiprocessing
import time
from logging.handlers import RotatingFileHandler
import multiprocessing_logging

logger = logging.getLogger('test')
logger.setLevel(logging.DEBUG)
handler = RotatingFileHandler("test.log", maxBytes=10**4, backupCount=10)
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)

multiprocessing_logging.install_mp_handler(logger)

logs = xrange(1000)

def log_log(message):
    logger.debug(message)
    time.sleep(0.001)

nproc = 10
pool = multiprocessing.Pool(processes=nproc)
try:
    pool.imap_unordered(log_log, logs)
    pool.close()
    pool.join()
except KeyboardInterrupt:
    print 'Keyboard interrupt, stopping processing...'
    pool.terminate()
    pool.join()
except Exception, e:
    print 'Exception: %r, stopping processing...' % (e,)
    pool.terminate()
    pool.join()
    raise

log_files = glob.glob("test.log*")
messages = 0
for f in log_files:
    with open(f) as fp:
        messages += len(fp.readlines())
print messages

Logging working with pool on Windows!

At present I'm using your PyPI package on Windows with a pool. When passing the install_mp_handler function to the pool as initializer, I do not see any problems. Logging simply works, also from the child processes. Just wanted to let you know!

import logging
import multiprocessing

logger = logging.getLogger(__name__)
import multiprocessing_logging

[...]

pool = multiprocessing.Pool(processes=8, maxtasksperchild=20,
                            initializer=multiprocessing_logging.install_mp_handler)

[...]

Logging is not flushed

Unlike normal logging, which flushes after every single call, it would appear that multiprocessing logging does not get flushed after every call. If this is intentional (due to multiprocessing constraints/purpose), I think it should be documented. Maybe a function to flush the log buffer would be nice as well. This caused errors in some of my unit tests that check whether logging is working.

The script below fails (and after it fails, it does write to the file). If you comment out the multiprocessing handler, it succeeds.

from datetime import datetime
import logging
import os
import time

import multiprocessing_logging

path = f"/tmp/{datetime.now().strftime('%Y_%m_%d_%M_%S.%f')}.log"
if os.path.exists(path):
    os.remove(path)
print(path)

logging.root.handlers = []
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s-%(levelname)s: %(message)s',
                    handlers=[logging.StreamHandler(),
                              logging.FileHandler(path)])
logging.captureWarnings(True)
multiprocessing_logging.install_mp_handler()

log_str = "test"
logging.info(log_str)
with open(path, "r") as f:
    assert log_str in f.read()
raise Exception

Does not work on Windows?

Hi, it seems that it does not work on Windows, see the following code. I did what the documentation said:

  1. configure logging
  2. install_mp_handler
  3. profit (start logging, start threads)

Expected result:

[WARNING] Main
[WARNING] Thread

Actual result:

[WARNING] Main
Thread

This is because the logger is not configured in the new process when we use the "spawn" start method - the default and only one available on Windows.

import multiprocessing
import logging
import sys
import multiprocessing_logging


def thread_test():
    #logging.basicConfig(stream=sys.stdout, format="[%(levelname)s] %(message)s")
    logging.getLogger().warning("Thread")


def main():
    multiprocessing.set_start_method("spawn")

    logging.basicConfig(stream=sys.stdout, format="[%(levelname)s] %(message)s")
    multiprocessing_logging.install_mp_handler()
    logging.getLogger().warning("Main")

    p = multiprocessing.Process(target=thread_test)
    p.start()
    p.join()


if __name__ == "__main__":
    main()

The only way to make this work for me was to configure logging in the new process, too.

Support python 3.11 ?

It seems that with Python 3.11, sometimes there is the following crash

  File "/usr/local/lib/python3.11/multiprocessing/connection.py", line 378, in _recv
    chunk = read(handle, remaining)
            ^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'NoneType' object cannot be interpreted as an integer

You can see more details in oxsecurity/megalinter#2425

Would it be possible to provide a fix ?

Many thanks ! :)

cc @llaville

Causes Deadlock

I believe this is causing a deadlock in my project. Currently I have certain processes which unexpectedly stop mid-processing with no exceptions; the problem goes away when I disable multiprocessing-logging. From what I have read it may be related to the GIL. Here is a relevant discussion I have found regarding this: https://bugs.python.org/issue27422

I have noticed many people warning not to mix threading and processes, or, if you do, to start threads after you fork. I am not well equipped to debug this any further than I have, but I thought I would leave an issue here in case someone else has similar issues.
