pprofile's Introduction

Line-granularity, thread-aware deterministic and statistic pure-python profiler

Inspired by Robert Kern's line_profiler.

Usage

As a command:

$ pprofile some_python_executable arg1 ...

Once some_python_executable returns, pprofile prints the annotated code of each file involved in the execution.

As a command, ignoring any files from the default sys.path (i.e., the Python modules themselves), for shorter output:

$ pprofile --exclude-syspath some_python_executable arg1 ...

As a command executing a module, like python -m. --exclude-syspath is not recommended in this mode, as it will likely hide what you intend to profile. Also, explicitly ending pprofile's own arguments with -- prevents it from accidentally stealing the command's arguments:

$ pprofile -m some_python_module -- arg1 ...

As a module:

import pprofile

def someHotSpotCallable():
    # Deterministic profiler
    prof = pprofile.Profile()
    with prof():
        ...  # Code to profile
    prof.print_stats()

def someOtherHotSpotCallable():
    # Statistic profiler
    prof = pprofile.StatisticalProfile()
    with prof(
        period=0.001, # Sample every 1ms
        single=True, # Only sample current thread
    ):
        ...  # Code to profile
    prof.print_stats()

For advanced usage, see pprofile --help and pydoc pprofile.

Profiling overhead

pprofile's default mode (deterministic profiling) has a large overhead, partly because it is written to be as portable as possible (so no C extension). When this large overhead is an issue, it can be avoided by using statistic profiling, at the cost of less readable results.

Rule of thumb:

Code to profile runs for... | Deterministic profiling | Statistic profiling
----------------------------+-------------------------+--------------------
a few seconds               | Yes                     | No [1]
a few minutes               | Maybe                   | Yes
more (ex: daemon)           | No                      | Yes [2]

Once you have identified the hot spot and decided you need finer-grained profiling to understand what needs fixing, try to make the code you want to profile run for a shorter time, so that you can reasonably use deterministic profiling: use a smaller data set that triggers the same code path, or modify the code to only enable profiling around small pieces of code.

[1] Statistic profiling will not have time to collect enough samples to produce usable output.
[2] You may want to consider triggering pprofile from a signal handler or other IPC mechanism to profile a shorter subset. See zpprofile.py for an example of how it can be used to profile code inside a running (Zope) service (in which case the IPC mechanism is just Zope's normal URL handling).
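
For example, a daemon could toggle statistical profiling from a signal handler. The sketch below is only illustrative: it drives the prof(period=...) context manager documented above through the context-manager protocol, and the SIGUSR1 choice and output file name are assumptions, not part of pprofile's API.

import signal
import pprofile

# Illustrative sketch: toggle statistical profiling of a running process
# with SIGUSR1. Only StatisticalProfile, prof(period=...), dump_stats and
# print_stats come from pprofile as documented on this page; the rest is an
# assumption to adapt to your service.
_prof = pprofile.StatisticalProfile()
_running = None  # the active profiling context, if any

def _toggle_profiling(signum, frame):
    global _running
    if _running is None:
        _running = _prof(period=0.001)        # sample every 1ms
        _running.__enter__()                  # start sampling
    else:
        _running.__exit__(None, None, None)   # stop sampling
        _running = None
        _prof.dump_stats(filename='pprofile.out.txt')  # or _prof.print_stats()

signal.signal(signal.SIGUSR1, _toggle_profiling)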

Output

Supported output formats.

Callgrind

The most useful output mode of pprofile is the Callgrind Profile Format, which allows browsing profiling results with kcachegrind (or qcachegrind on Windows).

$ pprofile --format callgrind --out cachegrind.out.threads demo/threads.py

Callgrind format is implicitly enabled if the --out basename starts with cachegrind.out., so the above command can be simplified to:

$ pprofile --out cachegrind.out.threads demo/threads.py

If you are analyzing callgrind traces on a different machine, you may want to use the --zipfile option to generate a zip file containing all files:

$ pprofile --out cachegrind.out.threads --zipfile threads_source.zip demo/threads.py

Generated files use relative paths, so you can extract the archive into the same directory as the profiling result, and kcachegrind will load those sources - and not your system-wide files, which may differ.

Annotated code

Human-readable output, but it can become difficult to use with large programs.

$ pprofile demo/threads.py

Profiling modes

Deterministic profiling

In deterministic profiling mode, pprofile gets notified of each executed line. This mode generates very detailed reports, but at the cost of a large overhead. Also, as profiling hooks are per-thread, profiling must either be enabled before spawning threads (if you want to profile more than just the current thread), or the profiled application must provide a way of enabling profiling afterwards - which is not very convenient.

$ pprofile --threads 0 demo/threads.py
Command line: ['demo/threads.py']
Total duration: 1.00573s
File: demo/threads.py
File duration: 1.00168s (99.60%)
Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
     1|         2|  3.21865e-05|  1.60933e-05|  0.00%|import threading
     2|         1|  5.96046e-06|  5.96046e-06|  0.00%|import time
     3|         0|            0|            0|  0.00%|
     4|         2|   1.5974e-05|  7.98702e-06|  0.00%|def func():
     5|         1|      1.00111|      1.00111| 99.54%|  time.sleep(1)
     6|         0|            0|            0|  0.00%|
     7|         2|  2.00272e-05|  1.00136e-05|  0.00%|def func2():
     8|         1|  1.69277e-05|  1.69277e-05|  0.00%|  pass
     9|         0|            0|            0|  0.00%|
    10|         1|  1.81198e-05|  1.81198e-05|  0.00%|t1 = threading.Thread(target=func)
(call)|         1|  0.000610828|  0.000610828|  0.06%|# /usr/lib/python2.7/threading.py:436 __init__
    11|         1|  1.52588e-05|  1.52588e-05|  0.00%|t2 = threading.Thread(target=func)
(call)|         1|  0.000438929|  0.000438929|  0.04%|# /usr/lib/python2.7/threading.py:436 __init__
    12|         1|  4.79221e-05|  4.79221e-05|  0.00%|t1.start()
(call)|         1|  0.000843048|  0.000843048|  0.08%|# /usr/lib/python2.7/threading.py:485 start
    13|         1|  6.48499e-05|  6.48499e-05|  0.01%|t2.start()
(call)|         1|   0.00115609|   0.00115609|  0.11%|# /usr/lib/python2.7/threading.py:485 start
    14|         1|  0.000205994|  0.000205994|  0.02%|(func(), func2())
(call)|         1|      1.00112|      1.00112| 99.54%|# demo/threads.py:4 func
(call)|         1|  3.09944e-05|  3.09944e-05|  0.00%|# demo/threads.py:7 func2
    15|         1|  7.62939e-05|  7.62939e-05|  0.01%|t1.join()
(call)|         1|  0.000423908|  0.000423908|  0.04%|# /usr/lib/python2.7/threading.py:653 join
    16|         1|  5.26905e-05|  5.26905e-05|  0.01%|t2.join()
(call)|         1|  0.000320196|  0.000320196|  0.03%|# /usr/lib/python2.7/threading.py:653 join

Note that the time.sleep call is not counted as such. For some reason, Python does not generate c_call/c_return/c_exception events here (and they are, as a result, ignored by the current code).

Statistic profiling

In statistic profiling mode, pprofile periodically snapshots the current call stack(s) of the current process to see what is being executed. As a result, profiler overhead can be dramatically reduced, making it possible to profile real workloads. Also, as statistic profiling acts at the whole-process level, it can be toggled independently of the profiled code.

The downside of statistic profiling is that the output lacks timing information, which makes it harder to understand.

$ pprofile --statistic .01 demo/threads.py
Command line: ['demo/threads.py']
Total duration: 1.0026s
File: demo/threads.py
File duration: 0s (0.00%)
Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
     1|         0|            0|            0|  0.00%|import threading
     2|         0|            0|            0|  0.00%|import time
     3|         0|            0|            0|  0.00%|
     4|         0|            0|            0|  0.00%|def func():
     5|       288|            0|            0|  0.00%|  time.sleep(1)
     6|         0|            0|            0|  0.00%|
     7|         0|            0|            0|  0.00%|def func2():
     8|         0|            0|            0|  0.00%|  pass
     9|         0|            0|            0|  0.00%|
    10|         0|            0|            0|  0.00%|t1 = threading.Thread(target=func)
    11|         0|            0|            0|  0.00%|t2 = threading.Thread(target=func)
    12|         0|            0|            0|  0.00%|t1.start()
    13|         0|            0|            0|  0.00%|t2.start()
    14|         0|            0|            0|  0.00%|(func(), func2())
(call)|        96|            0|            0|  0.00%|# demo/threads.py:4 func
    15|         0|            0|            0|  0.00%|t1.join()
    16|         0|            0|            0|  0.00%|t2.join()
File: /usr/lib/python2.7/threading.py
File duration: 0s (0.00%)
Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
[...]
   308|         0|            0|            0|  0.00%|    def wait(self, timeout=None):
[...]
   338|         0|            0|            0|  0.00%|            if timeout is None:
   339|         1|            0|            0|  0.00%|                waiter.acquire()
   340|         0|            0|            0|  0.00%|                if __debug__:
[...]
   600|         0|            0|            0|  0.00%|    def wait(self, timeout=None):
[...]
   617|         0|            0|            0|  0.00%|            if not self.__flag:
   618|         0|            0|            0|  0.00%|                self.__cond.wait(timeout)
(call)|         1|            0|            0|  0.00%|# /usr/lib/python2.7/threading.py:308 wait
[...]
   724|         0|            0|            0|  0.00%|    def start(self):
[...]
   748|         0|            0|            0|  0.00%|        self.__started.wait()
(call)|         1|            0|            0|  0.00%|# /usr/lib/python2.7/threading.py:600 wait
   749|         0|            0|            0|  0.00%|
   750|         0|            0|            0|  0.00%|    def run(self):
[...]
   760|         0|            0|            0|  0.00%|            if self.__target:
   761|         0|            0|            0|  0.00%|                self.__target(*self.__args, **self.__kwargs)
(call)|       192|            0|            0|  0.00%|# demo/threads.py:4 func
   762|         0|            0|            0|  0.00%|        finally:
[...]
   767|         0|            0|            0|  0.00%|    def __bootstrap(self):
[...]
   780|         0|            0|            0|  0.00%|        try:
   781|         0|            0|            0|  0.00%|            self.__bootstrap_inner()
(call)|       192|            0|            0|  0.00%|# /usr/lib/python2.7/threading.py:790 __bootstrap_inner
[...]
   790|         0|            0|            0|  0.00%|    def __bootstrap_inner(self):
[...]
   807|         0|            0|            0|  0.00%|            try:
   808|         0|            0|            0|  0.00%|                self.run()
(call)|       192|            0|            0|  0.00%|# /usr/lib/python2.7/threading.py:750 run

Some details are lost (not all executed lines have a non-null hit-count), but the hot spot is still easily identifiable in this trivial example, and its call stack is still visible.

Thread-aware profiling

ThreadProfile class provides the same features as Profile, but uses threading.settrace to propagate tracing to threading.Thread threads started after profiling is enabled.
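
A minimal sketch of module usage, mirroring the Profile example above (assuming ThreadProfile accepts the same with prof(): usage as Profile; the worker function is purely illustrative):

import threading
import time
import pprofile

def worker():
    time.sleep(0.1)

prof = pprofile.ThreadProfile()
with prof():
    t = threading.Thread(target=worker)
    t.start()   # started after profiling is enabled, so it gets traced too
    worker()
    t.join()
prof.print_stats()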

Limitations

The time spent in another thread is not discounted from the interrupted line. In the long run this should not be a problem if context switches are evenly distributed among lines, but threads executing fewer lines will appear to consume more CPU time than they really do.

This is not specific to simultaneous multi-thread profiling: profiling a single thread of a multi-threaded application will also be polluted by time spent in other threads.

Example (lines are reported as taking longer to execute when profiled along with another thread - although the other thread is not profiled):

$ demo/embedded.py
Total duration: 1.00013s
File: demo/embedded.py
File duration: 1.00003s (99.99%)
Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
     1|         0|            0|            0|  0.00%|#!/usr/bin/env python
     2|         0|            0|            0|  0.00%|import threading
     3|         0|            0|            0|  0.00%|import pprofile
     4|         0|            0|            0|  0.00%|import time
     5|         0|            0|            0|  0.00%|import sys
     6|         0|            0|            0|  0.00%|
     7|         1|   1.5974e-05|   1.5974e-05|  0.00%|def func():
     8|         0|            0|            0|  0.00%|  # Busy loop, so context switches happen
     9|         1|  1.40667e-05|  1.40667e-05|  0.00%|  end = time.time() + 1
    10|    146604|     0.511392|  3.48826e-06| 51.13%|  while time.time() < end:
    11|    146603|      0.48861|  3.33288e-06| 48.85%|    pass
    12|         0|            0|            0|  0.00%|
    13|         0|            0|            0|  0.00%|# Single-treaded run
    14|         0|            0|            0|  0.00%|prof = pprofile.Profile()
    15|         0|            0|            0|  0.00%|with prof:
    16|         0|            0|            0|  0.00%|  func()
(call)|         1|      1.00003|      1.00003| 99.99%|# ./demo/embedded.py:7 func
    17|         0|            0|            0|  0.00%|prof.annotate(sys.stdout, __file__)
    18|         0|            0|            0|  0.00%|
    19|         0|            0|            0|  0.00%|# Dual-threaded run
    20|         0|            0|            0|  0.00%|t1 = threading.Thread(target=func)
    21|         0|            0|            0|  0.00%|prof = pprofile.Profile()
    22|         0|            0|            0|  0.00%|with prof:
    23|         0|            0|            0|  0.00%|  t1.start()
    24|         0|            0|            0|  0.00%|  func()
    25|         0|            0|            0|  0.00%|  t1.join()
    26|         0|            0|            0|  0.00%|prof.annotate(sys.stdout, __file__)
Total duration: 1.00129s
File: demo/embedded.py
File duration: 1.00004s (99.88%)
Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
[...]
     7|         1|  1.50204e-05|  1.50204e-05|  0.00%|def func():
     8|         0|            0|            0|  0.00%|  # Busy loop, so context switches happen
     9|         1|  2.38419e-05|  2.38419e-05|  0.00%|  end = time.time() + 1
    10|     64598|     0.538571|  8.33728e-06| 53.79%|  while time.time() < end:
    11|     64597|     0.461432|  7.14324e-06| 46.08%|    pass
[...]

This also means that the sum of the percentages across all lines can exceed 100%. It can reach 100% times the number of concurrent threads (200% with 2 threads busy for the whole profiled execution, etc.).

Example with 3 threads:

$ pprofile demo/threads.py
Command line: ['demo/threads.py']
Total duration: 1.00798s
File: demo/threads.py
File duration: 3.00604s (298.22%)
Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
     1|         2|  3.21865e-05|  1.60933e-05|  0.00%|import threading
     2|         1|  6.91414e-06|  6.91414e-06|  0.00%|import time
     3|         0|            0|            0|  0.00%|
     4|         4|  3.91006e-05|  9.77516e-06|  0.00%|def func():
     5|         3|      3.00539|       1.0018|298.16%|  time.sleep(1)
     6|         0|            0|            0|  0.00%|
     7|         2|  2.31266e-05|  1.15633e-05|  0.00%|def func2():
     8|         1|  2.38419e-05|  2.38419e-05|  0.00%|  pass
     9|         0|            0|            0|  0.00%|
    10|         1|  1.81198e-05|  1.81198e-05|  0.00%|t1 = threading.Thread(target=func)
(call)|         1|  0.000612974|  0.000612974|  0.06%|# /usr/lib/python2.7/threading.py:436 __init__
    11|         1|  1.57356e-05|  1.57356e-05|  0.00%|t2 = threading.Thread(target=func)
(call)|         1|  0.000438213|  0.000438213|  0.04%|# /usr/lib/python2.7/threading.py:436 __init__
    12|         1|  6.60419e-05|  6.60419e-05|  0.01%|t1.start()
(call)|         1|  0.000913858|  0.000913858|  0.09%|# /usr/lib/python2.7/threading.py:485 start
    13|         1|   6.8903e-05|   6.8903e-05|  0.01%|t2.start()
(call)|         1|   0.00167513|   0.00167513|  0.17%|# /usr/lib/python2.7/threading.py:485 start
    14|         1|  0.000200272|  0.000200272|  0.02%|(func(), func2())
(call)|         1|      1.00274|      1.00274| 99.48%|# demo/threads.py:4 func
(call)|         1|  4.19617e-05|  4.19617e-05|  0.00%|# demo/threads.py:7 func2
    15|         1|  9.58443e-05|  9.58443e-05|  0.01%|t1.join()
(call)|         1|  0.000411987|  0.000411987|  0.04%|# /usr/lib/python2.7/threading.py:653 join
    16|         1|  5.29289e-05|  5.29289e-05|  0.01%|t2.join()
(call)|         1|  0.000316143|  0.000316143|  0.03%|# /usr/lib/python2.7/threading.py:653 join

Note that the call time is not added to the file total: it is already accounted for inside "func".

Why another profiler?

Python's standard profiling tools have a callable-level granularity, which means it is only possible to tell which function is a hot-spot, not which lines in that function.

Robert Kern's line_profiler is a very nice alternative providing line-level profiling granularity, but in my opinion it has a few drawbacks which (in addition to the attractive technical challenge) made me start pprofile:

  • It is not pure Python. This choice makes sense for performance, but it makes usage with PyPy difficult and requires installation (I value execution straight from a checkout).
  • It requires source code modification to select what should be profiled. I prefer to have the option to do an in-depth, non-intrusive profiling.
  • As a consequence of the previous point, it has no notion of anything above an individual callable: it annotates functions but not whole files, which prevents profiling module imports.
  • Profiling recursive code gives unexpected results (recursion cost is accumulated on the callable's first line) because it does not track the call stack. This may be unintended, and may be fixed at some point in line_profiler.

pprofile's People

Contributors

dangonite57, flying-sheep, jakirkham, nagesh4193, pilcru, sth, vpelletier


pprofile's Issues

Allow for use of command line parameters

Currently pprofile seems to gather ALL command line parameters it can find, e.g.:

$ pprofile file_metadata/wikibot/log_bot.py -search:'eth-bib' -limit:5 -dry
usage: pprofile [-h] [-o OUT] [-z ZIPFILE] [-t THREADS] [-f {text,callgrind}]
                [-v] [-s STATISTIC] [--exclude-syspath] [--exclude EXCLUDE]
                [--include INCLUDE]
                script
pprofile: error: argument -s/--statistic: invalid float value: 'earch:eth-bib'

As you can see, pprofile takes the "-s" from "-search:'eth-bib'", does not understand what to do with it, and thereby messes up the command to execute.

I propose not touching anything after and including script: the parameters given after script belong to script and NOT to pprofile. Maybe a parameter is needed to mark where the actual script command starts, e.g. ... -c script params-and-stuff-belonging-to-script.

"not defined" errors when importing via "from __main__ import *"

Since I have worked with pprofile in the past and find it the single best Python profiling tool (I particularly love how easy it is to use its Callgrind Profile Format output mode), I just tried it on a few projects I currently have to manage.

However, code execution breaks, stating that several objects and libraries are not defined - all things that are inherited via from __main__ import *. Unfortunately, the specific setup of those projects does require the usage of from __main__ import *. Needless to say, everything works fine if executing the main entry point file via python main.py instead of pprofile main.py.

Would it be at all possible to solve this and make pprofile recognize objects inherited via from __main__ import *? I really hope so, as none of the alternative profiling tools goes as deep as yours.

Clarify how to run the statistical profiler over just a portion of the code?

The README gives

import pprofile

def someHotSpotCallable():
    profiler = pprofile.Profile()
    with profiler:
        # Some hot-spot code
    profiler.print_stats()

as an example. However, it seems(?) not possible to directly swap in a StatisticalProfile, as its docstring says "This class does not gather its own samples by itself. Instead, it must be provided with call stacks (as returned by sys._getframe() or sys._current_frames())." I assume the context manager doesn't take care of setting up a second thread and so on?
Additional pointers in the doc would be welcome, thanks in advance.
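
For reference, the statistical-profiler module example quoted near the top of this page wraps the code to profile in the context manager returned by prof(period=..., single=...), which takes care of the sampling; reproduced here with a placeholder body:

import pprofile

prof = pprofile.StatisticalProfile()
with prof(
    period=0.001, # Sample every 1ms
    single=True, # Only sample current thread
):
    ...  # Code to profile
prof.print_stats()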

Clarify Callgrind Event Types

Hi @vpelletier

Dumping stats to be visualised in CacheGrind gives us 3 costing options for inclusive and self sorting: hits, us, us/hit.


Could you clarify exactly what these Event Types are please?

Many thanks!

Limiting the "depth" of the profile

pprofile seems to be a great tool. However, it is giving me too much output. Is there any way to limit the output? Basically, can I request only the times of the lines that I want to profile, without going into the subroutines they are calling? More generally, can I control the "depth" to which the profiler will descend when profiling?

It seems from the README that the recommended method for using pprofile on larger projects is to use (q/k)cachegrind to visualize the results. However, it doesn't quite give me the line-by-line profile that I'm seeking. In particular, there are a number of 'import' statements in my scripts that I know are using up a lot of time, but they cannot be found in the profile at all.

Any help would be much appreciated.

Which versions of Python are supported? Looks like not Python 3.6

I successfully installed this tool via the setup.py script. I would have preferred conda but it was not available at the channels I am subscribed to.

Then I ran this:

>>> import pprofile
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "E:\Downloads\pprofile-master\pprofile.py", line 57, in <module>
    import cStringIO
ModuleNotFoundError: No module named 'cStringIO'
>>>

A check of the documentation showed that cStringIO is available in Python 2.7 [0] but not in Python 3 anymore [1]. Could it be possible to change pprofile so that it can work under Python 3.6 as well?

[0] https://docs.python.org/2/library/stringio.html
[1] https://docs.python.org/3.0/whatsnew/3.0.html

wildly incorrect relative performance of generator expression vs map, off by 2 orders of magnitude

Wall time testing gives relative time spent in generator expression, list comprehension, map, and loop doing the same job to be roughly 1.3 : 1.6 : 1 : 1.8, while pprofile gives 102 (!) : 32 : 1 : 60:

Command line: genexpr-vs-map
Total duration: 1098.69s
File: genexpr-vs-map
File duration: 1098.68s (100.00%)
Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
     1|         0|            0|            0|  0.00%|#!/usr/bin/python3
     2|         0|            0|            0|  0.00%|
     3|         2|  0.000146389|  7.31945e-05|  0.00%|import sys
     4|         0|            0|            0|  0.00%|
     5|         2|  4.60148e-05|  2.30074e-05|  0.00%|def profile_generator_expression():
     6|  20000003|       571.97|  2.85985e-05| 52.06%|    return tuple(str(n) for n in r)
(call)|  10000001|      293.337|  2.93337e-05| 26.70%|# genexpr-vs-map:6 <genexpr>
     7|         0|            0|            0|  0.00%|
     8|         2|  6.38962e-05|  3.19481e-05|  0.00%|def profile_list_comprehension():
     9|  10000003|      181.039|  1.81039e-05| 16.48%|    return tuple([str(n) for n in r])
(call)|         1|      180.307|      180.307| 16.41%|# genexpr-vs-map:9 <listcomp>
    10|         0|            0|            0|  0.00%|
    11|         2|  6.77109e-05|  3.38554e-05|  0.00%|def profile_map():
    12|         1|      5.55134|      5.55134|  0.51%|    return tuple(map(str, r))
    13|         0|            0|            0|  0.00%|
    14|         2|  6.55651e-05|  3.27826e-05|  0.00%|def profile_loop():
    15|         1|  2.71797e-05|  2.71797e-05|  0.00%|    result = []
    16|  10000001|      159.363|  1.59363e-05| 14.50%|    for n in r:
    17|  10000000|      177.635|  1.77635e-05| 16.17%|        result.append(str(n))
    18|         1|     0.415125|     0.415125|  0.04%|    return tuple(result)
    19|         0|            0|            0|  0.00%|
    20|         1|  3.19481e-05|  3.19481e-05|  0.00%|r = range(10 ** 7)
    21|         0|            0|            0|  0.00%|
    22|         1|  2.64645e-05|  2.64645e-05|  0.00%|profile = (
    23|         1|  2.74181e-05|  2.74181e-05|  0.00%|    profile_generator_expression,
    24|         1|  2.64645e-05|  2.64645e-05|  0.00%|    profile_list_comprehension,
    25|         1|   2.6226e-05|   2.6226e-05|  0.00%|    profile_map,
    26|         1|  2.64645e-05|  2.64645e-05|  0.00%|    profile_loop,
    27|         0|            0|            0|  0.00%|)
    28|         0|            0|            0|  0.00%|
    29|         1|  2.95639e-05|  2.95639e-05|  0.00%|if len(sys.argv) == 1:
    30|         1|  2.67029e-05|  2.67029e-05|  0.00%|    arg = '0123'
    31|         0|            0|            0|  0.00%|else:
    32|         0|            0|            0|  0.00%|    arg, = sys.argv[1:]
    33|         0|            0|            0|  0.00%|
    34|         5|   0.00044775|    8.955e-05|  0.00%|for ch in arg:
    35|         4|      2.70997|     0.677493|  0.25%|    profile[int(ch)]()
(call)|         1|       571.97|       571.97| 52.06%|# genexpr-vs-map:5 profile_generator_expression
(call)|         1|       181.04|       181.04| 16.48%|# genexpr-vs-map:8 profile_list_comprehension
(call)|         1|      5.55138|      5.55138|  0.51%|# genexpr-vs-map:11 profile_map
(call)|         1|      337.413|      337.413| 30.71%|# genexpr-vs-map:14 profile_loop

Tested on Debian with python3-pprofile 2.0.5-1 and python3.9 3.9.2-1.

Request simple way to profile a single function without annotating code

I often have programs that do extensive setup and then call a particular function just once that dominates the runtime, so I would like to profile this single function and get cumulative execution times for each line in that function. I would prefer to do this from the command line without touching the Python code.

Is there any existing way to do this? If so, could you provide an example? If not, could you add it? I think it would be extremely useful.

I have tried using the code annotation method from the pprofile doc around code within a function that I know is executed during the run:

import pprofile
# Deterministic profiler
prof = pprofile.Profile()
with prof():
  result = my_func()
prof.print_stats()

but it does not produce a single line of output, even though when I run pprofile from the command line it provides voluminous output (and thus does not let me focus on the one function). This is on Windows 10 using Git bash as my shell. Thanks.

Exclude syspath in python script

Hi,
I am trying to profile my script by writing another script that takes the module and profiles it. But unlike the command line (where I can use --exclude-syspath to avoid profiling the Python standard library), there doesn't seem to be an option to do this in a script. If there is, can you help me with it?

% can be greater than 100

I got the following line in output
(call)| 13592| 226.612| 0.0166725|1277.08%|# test.py:1 composite_candidates

Here is the relevant code:

def composite_candidates(primes, limit, factors=(1,), num=2, candidates=None, idx=0):
    if candidates == None:
        candidates = {1:1, 2:2, 4:3, 36:9}
    if num < limit:
        # current exponent must be equal or less than previous one
        if len(factors) == 1 or factors[-2] > factors[-1]:
            # we either stay with current prime and increase exponent
            composite_candidates(primes, limit, 
                                 factors[:-1] + (factors[-1] + 1,), 
                                 num * primes[idx], candidates, idx)
        # or move to next prime
        idx += 1
        factors = factors + (1,)
        num *= primes[idx]
        composite_candidates(primes, limit, factors, num, candidates, idx)
        
        # and add the candidate if it might be right
        if num < limit:
            if primes[idx].bit_length()-1 <= factors[0] <= 2 * primes[idx+1].bit_length():
                divisors = num_factors(factors)
                #candidates[num] = divisors
                candidates[num ]= divisors
        
    return candidates
    
def f(limit):
    primes = prime_sieve(2*limit.bit_length() + 5)
    candidates = composite_candidates(primes, limit)
f(2**60)

attribute error related to encoding attribute

Hi,
I am using pprofile to profile a Python (Python 2.7.9 :: Anaconda 2.2.0 (64-bit)) script written for Autodesk Maya 2017. print_stats raises an attribute error at line 147 in pprofile.py, which corresponds to line 151 of pprofile.py on GitHub, in class EncodeOrReplaceWriter(object):

def __init__(self, out):
        self._encoding = out.encoding or 'ascii'
        self._write = out.write

The error is given below
AttributeError: file D:/pprofile\pprofile.py line 147: 'maya.Output' object has no attribute 'encoding'
I am pretty sure this is something on my side. Any ideas?

Unable to capture metrics for Django application

pprofile 2.1.0
windows 11
python 3.11

Hi team, thanks for the tool. I am profiling a sample Django application with the following structure:

├── server
│ ├── app
│ │ ├── *.py
│ ├── views
│ ├── urls
├── utils
│ ├── some1
│ │ ├── *.py (20 files with subdirectories)
├── manage.py

When I run the application from the command line as pprofile --out djangoapp.txt manage.py runserver 0.0.0.0:8001,

the generated .txt file only contains metrics related to the virtual environment (the folder created using python -m venv test_env); only those modules and third-party libraries are captured.

Could you please explain how to make it work for the rest of the files? Thanks for your time.

print_stats not working on jupyter notebook

>>> profiler.print_stats()
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-229-57c24eb824a6> in <module>()
----> 1 profiler.print_stats()

/Users/elyase/miniconda3/envs/test/lib/python3.5/site-packages/pprofile.py in print_stats(self)
    518         Returns None.
    519         """
--> 520         self.annotate(_reopen(sys.stdout, errors='replace'))
    521 
    522 class ProfileRunnerBase(object):

/Users/elyase/miniconda3/envs/test/lib/python3.5/site-packages/pprofile.py in _reopen(stream, encoding, errors)
     64         # Also, I do not expect many 3.0 and 3.1 to be still used. Feel free to
     65         # report a bug if it raises.
---> 66         return codecs.getwriter(encoding)(stream.buffer, errors=errors)
     67 
     68 def _getFuncOrFile(func, module, line):

AttributeError: 'OutStream' object has no attribute 'buffer'

correct way to profile -only- the code contained in a single script

Hi,

I am looking at pprofile as an alternative to line_profiler.
Could you please tell me what's the correct way of using it so that only the code within a single script of my choice is profiled, without recursing into other modules and files called by the script?

Thanks a lot

Make context managers __enter__ return themselves

By making Profile.__enter__ and StatisticalThread.__enter__ return the objects themselves, their use becomes a tiny bit simpler:

prof = Profile()
with prof: ...
prof.print_stats()

becomes

with Profile() as prof: ...
prof.print_stats()

Likewise, for StatisticalThread, we can have

with StatisticalThread(StatisticalProfile(), ...) as thread: ...
thread.profile.print_stats()

AttributeError: 'OutStream' object has no attribute 'buffer' .. Anaconda Jupiter Python 3.5

Hi - I'm using the Anaconda Python 3.5 distribution as many people do .. with all updates applied via conda updates.

The conda line_profiler doesn't work even though it is in the conda repos.

pprofile looked interesting from a technical perspective, so I installed it via pip install, as it isn't in conda.

In a Jupyter notebook it seems to import ok:
import pprofile

Test function:

def work(n):
    print("hello")
    x = n * n
    print(x)
    pass

.. and call the code via the profiler:

profiler = pprofile.Profile()
with profiler:
    work(3)
profiler.print_stats()

But this fails with:

hello
9
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-9-6227c16eb3e5> in <module>()
      2 with profiler:
      3     work(3)
----> 4 profiler.print_stats()

/Users/tariq/anaconda3/lib/python3.5/site-packages/pprofile.py in print_stats(self)
    518         Returns None.
    519         """
--> 520         self.annotate(_reopen(sys.stdout, errors='replace'))
    521 
    522 class ProfileRunnerBase(object):

/Users/tariq/anaconda3/lib/python3.5/site-packages/pprofile.py in _reopen(stream, encoding, errors)
     64         # Also, I do not expect many 3.0 and 3.1 to be still used. Feel free to
     65         # report a bug if it raises.
---> 66         return codecs.getwriter(encoding)(stream.buffer, errors=errors)
     67 
     68 def _getFuncOrFile(func, module, line):

AttributeError: 'OutStream' object has no attribute 'buffer'

Any ideas?

I'd really like to have a profiler working inside the Jupyter notebook -- it's a constraint of developing an easy to use accessible tutorial/guide which avoids source code files and command lines ... in favour of the notebook.

ps .. dump_stats works .. so I guess this is to do with stdout/stderr interactions with the notebook ...

profiler.dump_stats(filename="abc.txt")

Values/Counts not displayed properly

"5.96046e-06" isn't really that helpful. If there is limited column space then the values should be rounded to within the appropriate decimal place.

   304|         1|  5.96046e-06|  5.96046e-06|  0.00%|    def select(self, whereclause=None, **params):
   305|         0|            0|            0|  0.00%|        """return a SELECT of this :class:`.FromClause`.
   306|         0|            0|            0|  0.00%|
   307|         0|            0|            0|  0.00%|        .. seealso::
   308|         0|            0|            0|  0.00%|
   309|         0|            0|            0|  0.00%|            :func:`~.sql.expression.select` - general purpose
   310|         0|            0|            0|  0.00%|            method which allows for arbitrary column lists.
   311|         0|            0|            0|  0.00%|
   312|         0|            0|            0|  0.00%|        """
   313|         0|            0|            0|  0.00%|
   314|         0|            0|            0|  0.00%|        return Select([self], whereclause, **params)
   315|         0|            0|            0|  0.00%|
   316|         1|  5.00679e-06|  5.00679e-06|  0.00%|    def join(self, right, onclause=None, isouter=False):

pprofile executing wrong Python file

My aim with pprofile was to perform statistic profiling of the execution of generate_imitation_data.py, excluding all files but two: imitation_generation/generation.py and imitation_generation/tutor.py. I ran the command:

pprofile -o profile_out.txt --statistic 0.01 --exclude *.py --include imitation_generation/generation.py --include imitation_generation/tutor.py generate_imitation_data.py

However, this command executes a different Python file in my directory: preprocess_data.py.

Errors with Python 3.3

I'm using Ubuntu 13.04 64-bit, with pprofile installed from pip into a virtualenv.

The following happens independently of the sampling rate I pass to pprofile.

pprofile --statistic .1 memtest.py                                                                  
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.3/threading.py", line 639, in _bootstrap_inner
    self.run()
  File "/home/ale/Programs/my-python3-env/lib/python3.3/site-packages/pprofile.py", line 620, in run
    test = self.ident.__cmp__
AttributeError: 'int' object has no attribute '__cmp__'

Command line: ['memtest.py']
Total duration: 0s

And if I use deterministic profiling, it seems to work but outputs the following at the end.

pprofile memtest.py
Traceback (most recent call last):
  File "/home/ale/Programs/my-python3-env/bin/pprofile", line 9, in <module>
    load_entry_point('pprofile==1.6.1', 'console_scripts', 'pprofile')()
  File "/home/ale/Programs/my-python3-env/lib/python3.3/site-packages/pprofile.py", line 731, in main
    commandline=repr(args),
  File "/home/ale/Programs/my-python3-env/lib/python3.3/site-packages/pprofile.py", line 290, in annotate
    call_list_by_line):
  File "/home/ale/Programs/my-python3-env/lib/python3.3/site-packages/pprofile.py", line 187, in _iterFile
    last_call_line = max(call_list_by_line)
ValueError: max() arg is an empty sequence

Potential UI improvements for the statistical profiler

I find the statistical profiler to be rather useful for line profiling, but I think there are some ways in which the UI can be improved for the statistical profiler:

  1. Since precise timing information is not available, the result output by print_stat would not show any percentage information. I think it would be more helpful if print_stat prints the percentage of hits when timing information is not available.
  2. In examples in the documentation, irrelevant lines are replaced with "[...]", which makes it easier to understand the examples. I think it might actually be possible to do this programmatically by detecting consecutive lines with hit counts below a threshold. This will help especially since threading.py will always appear in the results but is usually irrelevant.

If people agree with these ideas, maybe I can find some time to implement them and create a pull request.

Limit output to files in folder

I'd love to have the ability to limit the output of pprofile to the files of my own code (I don't mind where libraries spend their time). From a short look at your code, it seems to be almost implemented.

Return stats as a string

It would be nice if there was a way to get the stats as a string, as I want them but I don't want them on stdout. Instead, I'm going to have to write them to a file, then read that file's contents back. Weird.
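
A workaround sketch, assuming only that annotate() accepts any file-like object (as in the demo code and tracebacks elsewhere on this page, where it is passed sys.stdout):

import io
import pprofile

prof = pprofile.Profile()
with prof():
    ...  # code to profile

# Capture the annotated report in memory instead of writing it to stdout or a file.
buf = io.StringIO()
prof.annotate(buf)
report = buf.getvalue()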

Is it possible to implement ipython magics like in `line_profile`?

Hello,

thanks for pprofile! I don't know much about the internals of IPython or pprofile, but I've noticed that line_profiler can be loaded within an IPython notebook via %load_ext line_profiler and subsequently used in IPython. My question: is this possible for pprofile too, and/or is its implementation planned?

Details on how line_profiler does it can be seen here:
https://github.com/rkern/line_profiler/blob/master/line_profiler.py

Regards, Markus

`frame.f_lineno` can be none

frame.f_lineno can be None in limited circumstances in Python 3.10 as documented here. This can cause an issue in the Statistical profiler, as the line max(self.line_dict) can error out in this case here.

Trying to install older version of pprofile

Hi
I can install the current version of pprofile just fine. I am trying to install v2.0.2 (for uni they are very specific about the version) and a few of us are having issues.

I have an M1 MacBook Air and when trying to install a previous version of pprofile in miniconda I'm running into this error:

(mt) bin: $ pip install pprofile==2.0.2
Collecting pprofile==2.0.2
  Using cached pprofile-2.0.2.tar.gz (35 kB)
    ERROR: Command errored out with exit status 1:
     command: /Users/dfg/miniconda3/envs/mt/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/2y/xs2qf7ms04xbg9lyv_7s49c00000gn/T/pip-install-fnwm_ln5/pprofile_138821cbfeaf40d98bf1716762b7216d/setup.py'"'"'; __file__='"'"'/private/var/folders/2y/xs2qf7ms04xbg9lyv_7s49c00000gn/T/pip-install-fnwm_ln5/pprofile_138821cbfeaf40d98bf1716762b7216d/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/2y/xs2qf7ms04xbg9lyv_7s49c00000gn/T/pip-pip-egg-info-drtrq3xd
         cwd: /private/var/folders/2y/xs2qf7ms04xbg9lyv_7s49c00000gn/T/pip-install-fnwm_ln5/pprofile_138821cbfeaf40d98bf1716762b7216d/
    Complete output (1 lines):
    error in pprofile setup command: use_2to3 is invalid.
    ----------------------------------------
WARNING: Discarding https://files.pythonhosted.org/packages/35/d9/360f4483f735cbd4f1ac7316f3bdbee06b5872355963b913f1a53871ac72/pprofile-2.0.2.tar.gz#sha256=3469102f462f9fc2d889970afcf73d89c0d89a36c49a4c262c3edc302b4a22da (from https://pypi.org/simple/pprofile/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement pprofile==2.0.2 (from versions: 1.0, 1.0.1, 1.1, 1.2, 1.2.1, 1.3, 1.4, 1.4.1, 1.5, 1.6, 1.6.1, 1.6.2, 1.7, 1.7.1, 1.7.2, 1.7.3, 1.8, 1.8.1, 1.8.2, 1.8.3, 1.9, 1.9.1, 1.9.2, 1.10.0, 1.10.1, 1.11.0, 2.0.0, 2.0.1, 2.0.2, 2.0.3, 2.0.4, 2.0.5, 2.1.0)
ERROR: No matching distribution found for pprofile==2.0.2

I can install the latest version just fine. Would appreciate any help resolving this issue!

UnicodeEncode error when source contains unicode characters

After adding the tenacity module (6.2.0, from pip) to a Python (2.7, sadly) project, I'm getting the following error when calling profiler.annotate(_fh):

  File "/usr/lib/python2.7/site-packages/pprofile/__init__.py", line 676, in annotate
    }, file=out)
UnicodeEncodeError: 'ascii' codec can't encode character u'\xc9' in position 71: ordinal not in range(128)

It would seem that tenacity/__init__.py:3 which contains \xc3\x89 causes this, and I suspect this is a sign of a more general issue regarding code-as-utf8.

Running pprofile through different conda environments

Hi there, thanks for pprofile!

Today, in an attempt to speed up some nasty linear algebra calculations I have under the hood of my scripts, I created 4 different Conda environments and, in each, I set Python to use a different linear algebra engine (BLAS, OpenBLAS, ATLAS and Intel's Accelerated Python, one in each). Timing the code execution without pprofile, I can see that OpenBLAS and Intel's gave me dramatic speed-ups.

Now, when running pprofile from within each environment (i.e. from the terminal after activating each conda environment), I saw no difference at all. I found that to be suspicious, and started wondering whether pprofile was disregarding conda environments. To test it further, I changed the main script to import a Python library that I knew was available only in 2 out of the 4 environments - hence, if pprofile was using the Python from each conda environment, script execution should break in those 2 environments, because I would be importing libraries that are not available there. But that never happened: code execution went on normally in all 4 environments, meaning that pprofile is indeed disregarding the currently active Conda environment's Python.

Since when one does python file.py in a terminal with Conda activated, the actual Python version invoked is the one corresponding to the current Conda environment (or to the main system Python, in case Conda is not initialized), my impression is that pprofile does call the full path of the main system Python installation - instead of just executing a python command.

I wonder if there could be a way to adapt pprofile such that, if the user desires (e.g. via a pprofile option parameter), it would use the Python version currently available in the terminal via the python command? That is, a way such that pprofile would inherit the Python it uses to execute the passed file.py script from the current Conda environment?

This would certainly make it so much easier to compare the profiling over different low level Python frameworks.

Thanks again!

use_2to3 is no longer supported in setuptools

Hi, the use_2to3 flag in setup.py is no longer supported as of setuptools 58, see https://setuptools.pypa.io/en/latest/history.html#v58-0-0

Currently you end up with:

  Downloading pprofile-2.0.5.tar.gz (54 kB)
    ERROR: Command errored out with exit status 1:
     command: /srv/paws/bin/python3 -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-02vhpi04/pprofile_e92ad0a1902549b2a4eb132f0c6d8a90/setup.py'"'"'; __file__='"'"'/tmp/pip-install-02vhpi04/pprofile_e92ad0a1902549b2a4eb132f0c6d8a90/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /tmp/pip-pip-egg-info-i6nzbze7
         cwd: /tmp/pip-install-02vhpi04/pprofile_e92ad0a1902549b2a4eb132f0c6d8a90/
    Complete output (1 lines):
    error in pprofile setup command: use_2to3 is invalid.

No cStringIO on Python 3

Appears that zpprofile is importing cStringIO, which does not exist as named on Python 3.

import: 'zpprofile'
Traceback (most recent call last):
  File "/home/conda/feedstock_root/build_artifacts/pprofile_1531119090784/test_tmp/run_test.py", line 5, in <module>
    import zpprofile
  File "/home/conda/feedstock_root/build_artifacts/pprofile_1531119090784/_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/lib/python3.6/site-packages/zpprofile.py", line 69, in <module>
    from cStringIO import StringIO
ModuleNotFoundError: No module named 'cStringIO'

ref: https://circleci.com/gh/conda-forge/pprofile-feedstock/10
ref: https://github.com/vpelletier/pprofile/blob/1.11.0/zpprofile.py#L69
ref: https://stackoverflow.com/a/18284900

statistical profiler is broken?

Looks like a bug in the statistical profiler:

zatv@vostro-laptop:~/qqq$ pprofile --statistic 0.001 --out cachegrind.out.genData genData.py
Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python2.7/threading.py", line 810, in __bootstrap_inner
    self.run()
  File "/usr/local/lib/python2.7/dist-packages/pprofile.py", line 739, in run
    sample(frame)
  File "/usr/local/lib/python2.7/dist-packages/pprofile.py", line 664, in sample
    called_timing, called_code, 0)
TypeError: call() takes exactly 7 arguments (6 given)

Versioneer 0.18 incompatible with Python 3.12

Versioneer 0.18 uses configparser.SafeConfigParser(), which was removed in Python 3.12 (python/cpython#92503)

I have tentatively upgraded the vendored file to versioneer==0.19, which allows me to install on 3.12 without issue, however this does also appear to break support for Python 2, so it may not be an ideal solution for you.

> pip install --no-cache --force-reinstall pprofile

Collecting pprofile
  Downloading pprofile-2.1.0.tar.gz (56 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 56.7/56.7 kB 596.3 kB/s eta 0:00:00
  Preparing metadata (setup.py) ... error
  error: subprocess-exited-with-error

  × python setup.py egg_info did not run successfully.
  │ exit code: 1
  ╰─> [18 lines of output]
      C:\Users\Daniel\AppData\Local\Temp\pip-install-5iyuhadx\pprofile_13a59f164ac44042a04c5f8208ddd6d4\versioneer.py:421: SyntaxWarning: invalid escape sequence '\s'
        LONG_VERSION_PY['git'] = '''
      Traceback (most recent call last):
        File "<string>", line 2, in <module>
        File "<pip-setuptools-caller>", line 34, in <module>
        File "C:\Users\Daniel\AppData\Local\Temp\pip-install-5iyuhadx\pprofile_13a59f164ac44042a04c5f8208ddd6d4\setup.py", line 25, in <module>
          version=versioneer.get_version(),
                  ^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\Daniel\AppData\Local\Temp\pip-install-5iyuhadx\pprofile_13a59f164ac44042a04c5f8208ddd6d4\versioneer.py", line 1480, in get_version
          return get_versions()["version"]
                 ^^^^^^^^^^^^^^
        File "C:\Users\Daniel\AppData\Local\Temp\pip-install-5iyuhadx\pprofile_13a59f164ac44042a04c5f8208ddd6d4\versioneer.py", line 1412, in get_versions
          cfg = get_config_from_root(root)
                ^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "C:\Users\Daniel\AppData\Local\Temp\pip-install-5iyuhadx\pprofile_13a59f164ac44042a04c5f8208ddd6d4\versioneer.py", line 342, in get_config_from_root
          parser = configparser.SafeConfigParser()
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      AttributeError: module 'configparser' has no attribute 'SafeConfigParser'. Did you mean: 'RawConfigParser'?
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed

× Encountered error while generating package metadata.
╰─> See above for output.

note: This is an issue with the package mentioned above, not pip.
hint: See above for details.

two profiles gives error about missing stack attribute

When trying to run two profiles in one function, I always get this error on the 2nd one:

  File "/.../lib/python3.7/site-packages/pprofile.py", line 923, in _real_local_trace
    stack, callee_dict = self.stack
AttributeError: stack

Unexpected behavior with built-in iterators

Probably a silly remark, but we couldn't explain this behavior with my team.

When we profile a loop with a built-in iterator, e.g.:

for _ in range(10 ** 7):
    pass

The result is:

Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
     1|         0|            0|            0|  0.00%|for _ in range(10 ** 7):
     2|        20|            0|            0|  0.00%|    pass

i.e. that line 1 does not have any hit, while a pass has all the hits, which seems quite surprising. Do you know if this is normal?

Can I ignore certain modules on the output?

On my software I have several modules that I built myself, and so I want to profile them, but I also use some standard modules (like serial, json, os, etc) that I don't want to profile. Those std modules make the output enormous and difficult to analyze. Is there a way to remove them from the output?

License?

Can you please upload a LICENSE file to the project to clarify what license the code is released under? Thanks.

Exception ignored: NoneType has no attribute f_code

Hello,
thanks for pprofile! I got this exception while trying to run a script, and apparently it is raised just before (or at the very beginning of) my script. Also, it is not deterministic: the same script launched with the same inputs will not raise this exception every time.

Exception ignored in: <function WeakSet.__init__.<locals>._remove at 0x7f844517aea0>
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/_weakrefset.py", line 38, in _remove
    def _remove(item, selfref=ref(self)):
  File "/usr/local/lib/python3.7/site-packages/pprofile.py", line 916, in _real_global_trace
    callee_dict[(frame.f_back.f_code, frame.f_code)].append(callee_entry)
AttributeError: 'NoneType' object has no attribute 'f_code'

I launched with pprofile --output /path/to/out.txt myscript.py, if it is of any interest.
Did I do something wrong or is this a bug? In the latter case, please let me know if I can do something more to help.

spent time has wrong line

I'm using DreamPie to run pprofile. If I define a function like this

def f():
    print 'a'
    [a for a in range(1000000)]
    print str(f)

And profile it, I get

Total duration: 5.007s
File: <pyshell#9>
File duration: 5.007s (100.00%)
Line #|      Hits|         Time| Time per hit|      %|Source code
------+----------+-------------+-------------+-------+-----------
     1|         2|            0|            0|  0.00%|def f():
     2|         2|            0|            0|  0.00%|    print 'a'
(call)|         1|            0|            0|  0.00%|# <pyshell#9>:1 f
(call)|         1|            0|            0|  0.00%|# C:\Program Files (x86)\DreamPie\data\subp-py2\dreampielib\subprocess\__init__.py:254 displayhook
(call)|         1|        5.007|        5.007|100.00%|# <pyshell#9>:1 f
     3|         2|            0|            0|  0.00%|    sleep(5)
     4|         1|        5.007|        5.007|100.00%|    print str(f)
     5|         1|            0|            0|  0.00%|

So 5s are assigned to print instead of sleep.
