pythonprofilers / memory_profiler
Monitor memory usage of Python code
Home Page: http://pypi.python.org/pypi/memory_profiler
License: Other
This should work:
(Pdb) import memory_profiler
(Pdb) memory_profiler.memory_usage()
*** UnboundLocalError: local variable 'num' referenced before assignment
I know, this is pretty much cosmetic.
Anyway, it's a very useful little profiler; it allowed me to plot some great graphs of the memory usage of some functions. Thank you!
psutil 3.0.0 was recently released. Attempting to profile code using this version of psutil results in an exception:
Traceback (most recent call last):
File "/usr/local/bin/mprof", line 472, in <module>
actions[get_action()]()
File "/usr/local/bin/mprof", line 220, in run_action
include_children=options.include_children, stream=f)
File "/usr/local/lib/python2.7/dist-packages/memory_profiler.py", line 243, in memory_usage
include_children=include_children)
File "/usr/local/lib/python2.7/dist-packages/memory_profiler.py", line 48, in _get_memory
mem_info = getattr(process, 'memory_info', process.get_memory_info)
AttributeError: 'Process' object has no attribute 'get_memory_info'
(memory_profiler does appear to work correctly with the previous version of psutil, 2.2.1)
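The failing line is a known getattr pitfall: the third argument to getattr is evaluated eagerly, so `process.get_memory_info` is looked up even when `memory_info` exists. Here is a minimal sketch of the bug and a lazy fallback, using stand-in classes (OldAPI/NewAPI and rss_mib are illustrative names, not psutil or memory_profiler code):

```python
_TWO_20 = float(2 ** 20)

class OldAPI(object):
    # stands in for a psutil 1.x-style Process
    def get_memory_info(self):
        return (42 * 2 ** 20, 0)

class NewAPI(object):
    # stands in for a psutil 3.x-style Process (old name removed)
    def memory_info(self):
        return (42 * 2 ** 20, 0)

def rss_mib(process):
    # Buggy pattern: getattr(process, 'memory_info', process.get_memory_info)
    # evaluates the default eagerly, raising AttributeError on NewAPI.
    # Safe version: look up the new name first, fall back lazily.
    mem_info = getattr(process, 'memory_info', None)
    if mem_info is None:
        mem_info = process.get_memory_info
    return mem_info()[0] / _TWO_20

print(rss_mib(OldAPI()), rss_mib(NewAPI()))
```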
It would be nice to have a function that outputs the memory usage of a given object
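In the meantime, a rough per-object estimate can be sketched on top of sys.getsizeof, which alone only counts the container itself. deep_sizeof below is a hypothetical helper, not part of memory_profiler; it cannot see C-level buffers (e.g. numpy data), so it undercounts those:

```python
import sys

def deep_sizeof(obj, _seen=None):
    """Rough size of obj in bytes, following container references."""
    if _seen is None:
        _seen = set()
    if id(obj) in _seen:  # avoid double-counting shared objects
        return 0
    _seen.add(id(obj))
    size = sys.getsizeof(obj)
    if isinstance(obj, dict):
        size += sum(deep_sizeof(k, _seen) + deep_sizeof(v, _seen)
                    for k, v in obj.items())
    elif isinstance(obj, (list, tuple, set, frozenset)):
        size += sum(deep_sizeof(item, _seen) for item in obj)
    return size

print(deep_sizeof([[1, 2], {'a': 3}]))
```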
Would it be possible to use the resource library (http://docs.python.org/2/library/resource.html#resource-usage) instead of psutil?
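For reference, a peak-RSS reading from the stdlib resource module might look like the sketch below. Note that resource is POSIX-only (one reason to keep psutil as an option), and ru_maxrss units differ by platform; peak_rss_mib is a made-up name:

```python
import resource
import sys

def peak_rss_mib():
    """Peak resident set size of this process, in MiB (sketch).
    ru_maxrss is reported in kilobytes on Linux but in bytes on macOS."""
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == 'darwin':
        return peak / float(2 ** 20)
    return peak / 1024.0

print(peak_rss_mib())
```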
I see the license of this module given as Simplified BSD in the README, but I couldn't find the full license text in the source tree. To make this clearer, please add the full license text in a COPYING or LICENSE file.
"i" argument is no longer used; "r" argument is parsed but not actually used for anything.
The allocation of 2.05 MB reported on line 6 seems strange: it should be a small allocation, and the 2.05 MB should appear on the next line (for a for loop we take the max). The lines are probably misattributed.
└─[$] python memory_profiler.py examples/example_loop.py
Line # Mem usage Increment Line Contents
==============================================
4 @profile
5 7.56 MB 0.00 MB def my_func_dict():
6 9.61 MB 2.05 MB a = {}
7 9.61 MB 0.00 MB for i in range(10000):
8 9.61 MB 0.00 MB a[i] = i + 1
9 9.61 MB 0.00 MB return
Hi,
I was trying to profile a function that executes a subprocess.check_call(), which in turn runs a multithreaded program; that program is what I actually want to profile.
I am profiling with memory_usage, and it always returns the same value, around 9 MB. I guess that is the memory used by the function that creates the threads, but not by all the threads together.
The real function that I'm trying to benchmark would be this one. It is quite heavy to test, so I've written an ugly small script to reproduce the scenario. Here it is:
import sys
from multiprocessing import Pool
from memory_profiler import memory_usage

def test(n):
    l = [i for i in range(n)]

def test_multip(n, np):
    p = Pool(processes=np)
    results = p.map(test, [n]*np)

if __name__ == "__main__":
    t = str(sys.argv[1])
    n = int(sys.argv[2])
    if t == "test":
        print memory_usage((test, (n, )), max_usage=True, include_children=True)
    else:
        np = int(sys.argv[3])
        print memory_usage((test_multip, (n, np, )), max_usage=True, include_children=True)
If you benchmark the test function directly, memory_usage behaves as expected:
(master)guilc@milou-b:~/tests$ python multithread.py test 10000
[9.421875]
(master)guilc@milou-b:~/tests$ python multithread.py test 100000
[12.765625]
(master)guilc@milou-b:~/tests$ python multithread.py test 1000000
[47.67578125]
(master)guilc@milou-b:~/tests$ python multithread.py test 10000000
[396.0625]
(master)guilc@milou-b:~/tests$ python multithread.py test 100000000
[3879.73828125]
However, if you test the multithreaded version, this is what happens:
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 10000 1
[9.5234375]
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 100000 1
[9.51953125]
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 1000000 1
[9.4296875]
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 10000000 1
[9.31640625]
And the same if you try with more than one thread:
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 10000000 1
[9.31640625]
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 10000000 2
[9.3203125]
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 10000000 3
[9.3203125]
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 10000000 4
[9.3203125]
(master)guilc@milou-b:~/tests$ python multithread.py test_multip 10000000 5
[9.328125]
I hope that this is enough to understand and reproduce the problem. If you have any suggestion/idea of how to fix this, I can help on that.
Thanks a lot!
Hi!
The documentation is outdated in the "Executing external scripts" section.
Regarding timestamps, there is a piece of code under the line "It is also possible to timestamp a portion of code using a context manager like this:". It does not work in the latest version of memory_profiler.
Is there a possibility to still add custom timestamps and see plots like here?
Hi,
While reading your code, I found a small truncation issue
in _get_memory() (the psutil version; I didn't check the other).
Here is a gist to reproduce it:
https://gist.github.com/3665731
It is due to integer division in Python 2.x.
The float() suggests this was not intended, so I'm reporting it.
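The pitfall is easy to reproduce: with two int operands, Python 2's / floor-divides, so fractions of a MiB are silently lost. A minimal sketch of the fix (the _TWO_20 constant name matches the one memory_profiler uses; to_mib is a made-up helper):

```python
_TWO_20 = float(2 ** 20)  # a float divisor forces true division

def to_mib(num_bytes):
    # With integer operands on Python 2, (2 ** 20 - 1) / (2 ** 20) == 0;
    # dividing by a float keeps the fractional part on both Python 2 and 3.
    return num_bytes / _TWO_20

print(to_mib(2 ** 20 - 1))
```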
Please have a look at my question on Stack Overflow.
continue Ian's work
ianozsvald@642d0bc
See comments in 167641e
Be able to stop the script with C-c and still get output for what has run so far.
In some cases, memory is not reclaimed because of memory profiler. Here's a small example exhibiting the problem:
Line # Mem usage Increment Line Contents
================================================
6 @profile
7 20.254 MB 0.000 MB def random_array(shape):
8 43.148 MB 22.895 MB arr1 = np.random.randn(*shape)
9 138.320 MB 95.172 MB arr = scipy.signal.detrend(arr1, axis=1)
10 138.320 MB 0.000 MB del arr1
11 138.320 MB 0.000 MB gc.collect()
12
13 138.320 MB 0.000 MB col_mean = np.mean(arr, axis=1)
14 138.320 MB 0.000 MB np.testing.assert_array_less(abs(col_mean), 1e-15)
15 138.320 MB 0.000 MB return arr
arr1 is a numpy array weighing 22.9 MB. Detrending creates an array of exactly the same size (arr), but memory usage increases by 95 MB and does not go down, even after line 10.
I have checked that this is a side effect of the memory_profiler module by monitoring global memory usage without using memory_profiler at all: memory usage does rise to 96 MB during the execution of scipy.signal.detrend, but it also decreases right after its execution.
memory_profiler might be keeping a reference to the arr1 array somehow, but I wasn't able to find out how, even with guppy/heapy (which complains when memory_profiler is loaded).
This seems like a tricky issue to me, but it is really important, since the reported memory usage can be completely different from the value obtained without profiling.
EDIT: here is a snapshot of the memory usage graph.
running test/test_func.py
Line # Mem usage Increment Line Contents
==============================================
2 @profile
3 def test_1():
4 7.73 MB 0.00 MB # .. will be called twice ..
5 7.73 MB 0.00 MB a = 2.
6 7.73 MB 0.00 MB b = 3
7 7.61 MB -0.12 MB c = {}
8 7.80 MB 0.19 MB for i in range(1000):
9 7.80 MB 0.00 MB c[i] = 2
10 7.80 MB 0.00 MB c[0] = 2.
but the correct results should be the same as when calling the function once:
Line # Mem usage Increment Line Contents
==============================================
2 @profile
3 def test_1():
4 7.57 MB 0.00 MB # .. will be called twice ..
5 7.59 MB 0.02 MB a = 2.
6 7.59 MB 0.00 MB b = 3
7 7.59 MB 0.00 MB c = {}
8 7.73 MB 0.14 MB for i in range(1000):
9 7.73 MB 0.00 MB c[i] = 2
10 7.73 MB 0.00 MB c[0] = 2.
def disable(self):
    self.last_time = {}
    sys.settrace(None)
Instead, this should execute sys.settrace(previous_fn), restoring the callback that was set before LineProfiler was enabled. See how it's done in the Nostrils class from http://reminiscential.wordpress.com/2012/04/17/use-pythons-sys-settrace-for-fun-and-for-profit/ for an example.
Even more: we may want LineProfiler::trace_memory_usage to run the original trace callback too (if it's code coverage, for example, we don't want it skipped).
Here's a simple script using read_csv from pandas.io.parsers:

from pandas.io.parsers import read_csv

@profile
def test_read_csv():
    a = read_csv('dummy.txt')
    return a

if __name__ == '__main__':
    test_read_csv()
But when I call python -m memory_profiler profile_read_csv, the following error appears:
Traceback (most recent call last):
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 162, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/runpy.py", line 72, in _run_code
exec code in run_globals
File "/Library/Python/2.7/site-packages/memory_profiler.py", line 272, in <module>
execfile(__file__, locals(), locals())
File "profile_read_csv.py", line 10, in <module>
test_read_csv()
File "/Library/Python/2.7/site-packages/memory_profiler.py", line 158, in f
result = func(*args, **kwds)
File "profile_read_csv.py", line 5, in test_read_csv
a = read_csv('dummy.txt')
File "/Library/Python/2.7/site-packages/pandas-0.7.3-py2.7-macosx-10.7-intel.egg/pandas/io/parsers.py", line 187, in read_csv
return _read(TextParser, filepath_or_buffer, kwds)
File "/Library/Python/2.7/site-packages/pandas-0.7.3-py2.7-macosx-10.7-intel.egg/pandas/io/parsers.py", line 153, in _read
parser = cls(f, **kwds)
TypeError: __init__() got an unexpected keyword argument 'kwds'
Has anybody else seen this warning? I can't even suppress it with -W ignore.
I guess this is because psutil renamed the function from get_memory_info to memory_info.
I want to profile the time and memory usage of a class method.
When I try to use partial from functools, I get this error:
File "/usr/lib/python2.7/site-packages/memory_profiler.py", line 126, in memory_usage
aspec = inspect.getargspec(f)
File "/usr/lib64/python2.7/inspect.py", line 815, in getargspec
raise TypeError('{!r} is not a Python function'.format(func))
TypeError: <functools.partial object at 0x252da48> is not a Python function
By the way, exactly the same approach works fine with the timeit function.
When I try to use a lambda instead, I get this error:
File "/usr/lib/python2.7/site-packages/memory_profiler.py", line 141, in memory_usage
ret = parent_conn.recv()
IOError: [Errno 4] Interrupted system call
How can I handle class methods with memory_profiler? Are there any (even dirty) ways?
I asked this question on SO: http://stackoverflow.com/questions/16593246/how-to-use-memory-profiler-python-module-with-class-methods
UPD: fixed broken link to SO
I've been playing with psutil; I'm attaching two images showing disk I/O and network I/O measurement (both hacky proofs of concept).
The disk usage graph writes 10 files of 10 MB each (with flushes); we can see some odd caching behaviour which maybe needs some more work?
The network graph reads a 1.6 MB file from Wikipedia 5 times.
Both charts exhibit a spike at the end of their 'with' block which I don't understand.
Is there interest in merging this code into the main project? Obviously this goes beyond the remit of a memory profiler! All I've done is change a few lines with psutil in memory_profiler.py and fix one line in mprof for the plotting.
Per the psutil documentation in http://pythonhosted.org/psutil/#psutil.Process.children, the iteration in https://github.com/fabianp/memory_profiler/blob/master/memory_profiler.py#L54 should actually be:
for p in process.children(recursive=True):
Otherwise you get an Exception:
AttributeError: 'Process' object has no attribute 'get_children'
In [2]: import numpy as np
In [3]: %memit np.zeros(1e2)
maximum of 1: 28.300781 MB per loop
In [4]: %memit np.zeros(1e2)
maximum of 1: 28.320312 MB per loop
In [5]: %memit np.zeros(1e2)
maximum of 1: 28.320312 MB per loop
In [6]: %memit np.zeros(1e4)
maximum of 1: 28.328125 MB per loop
In [7]: %memit np.zeros(1e7)
maximum of 1: 28.406250 MB per loop
In [8]: %memit np.zeros(1e7)
maximum of 1: 104.710938 MB per loop
These iter* methods have been removed in Python 3, where the plain methods return lazy view objects by default.
% mprof plot
Traceback (most recent call last):
File "/usr/bin/mprof", line 467, in <module>
actions[get_action()]()
File "/usr/bin/mprof", line 436, in plot_action
mprofile = plot_file(filename, index=n, timestamps=timestamps)
File "/usr/bin/mprof", line 338, in plot_file
for values in ts.itervalues():
AttributeError: 'dict' object has no attribute 'itervalues'
The Python version is 3.3.3.
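A version-agnostic fix is simply to drop the iter* spelling: values() already returns a lazy view on Python 3 and merely builds a list on Python 2. A sketch, with a made-up ts dict mimicking mprof's timestamp data:

```python
# hypothetical stand-in for mprof's ts mapping of label -> samples
ts = {'func_a': [1.0, 2.0], 'func_b': [3.0]}

# dict.itervalues() no longer exists on Python 3; .values() works on both
# (a view object on Python 3, a list on Python 2).
all_samples = [v for values in ts.values() for v in values]
print(sorted(all_samples))
```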
Hi. Sometimes while testing it I get negative memory, for instance:
1119 97.7 MiB -30.0 MiB model.fit(X, y)
What does this mean?
mprof can use a context manager to place a label. If the label contains a space, e.g. "my label", then a ValueError is raised as shown below. If the space is removed (e.g. "my_label") then mprof plot displays the label.
It might be easiest to make a note in the README stating that spaces aren't allowed, if this disturbs your parsing code, or to catch the ValueError and hint to the user that spaces aren't allowed.
$ mprof plot
Traceback (most recent call last):
File "/home/ian/workspace/virtualenvs/high_performance_python_orielly/shared_github/raw_code/ian/env/bin/mprof", line 467, in <module>
actions[get_action()]()
File "/home/ian/workspace/virtualenvs/high_performance_python_orielly/shared_github/raw_code/ian/env/bin/mprof", line 436, in plot_action
mprofile = plot_file(filename, index=n, timestamps=timestamps)
File "/home/ian/workspace/virtualenvs/high_performance_python_orielly/shared_github/raw_code/ian/env/bin/mprof", line 322, in plot_file
mprofile = read_mprofile_file(filename)
File "/home/ian/workspace/virtualenvs/high_performance_python_orielly/shared_github/raw_code/ian/env/bin/mprof", line 299, in read_mprofile_file
ts.append([float(start), float(end),
ValueError: could not convert string to float: list
memory_profiler doesn't print memory usage with python3 (no error messages either), just empty output.
sometimes there's no measurement for the last line (maybe when there's no return statement?)
4 @profile
5 7.58 MB 0.00 MB def my_func_dict():
6 9.62 MB 2.05 MB a = {}
7 9.62 MB 0.00 MB for i in range(10000):
8 a[i] = i + 1
It is used in the memory_usage function like this:
if timeout is not None:
    max_iter = int(timeout / interval)
elif isinstance(proc, int):
    # external process and no timeout
    max_iter = 1
else:
    # for a Python function wait until it finishes
    max_iter = float('inf')  # <--------------------------

if isinstance(proc, (list, tuple)):
    # ... (snip)
else:
    # external process
    if proc == -1:
        proc = os.getpid()
    if max_iter == -1:
        max_iter = 1

    for _ in range(max_iter):  # <----------------
        ret.append(_get_memory(proc))
        time.sleep(interval)
    return ret
range(float('inf')) is an error, and max_iter is not used for anything else here.
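One way to sketch a fix is to avoid range() for the unbounded case, e.g. with itertools.count. The names below are illustrative, not the project's actual patch:

```python
import itertools

def iteration_counter(max_iter):
    # range() rejects float('inf'); fall back to an endless counter
    # when the caller wants to wait until the function finishes.
    if max_iter == float('inf'):
        return itertools.count()
    return range(int(max_iter))

limited = list(iteration_counter(3))
unlimited = iteration_counter(float('inf'))
print(limited, next(unlimited))
```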
Tested on both OS X 10.11 and Debian GNU/Linux 7.
Collecting memory-profiler
Downloading memory_profiler-0.38.tar.gz
Complete output from command python setup.py egg_info:
Traceback (most recent call last):
File "<string>", line 20, in <module>
File "/private/tmp/pip-build-BV2ywJ/memory-profiler/setup.py", line 1, in <module>
import memory_profiler
File "memory_profiler.py", line 863, in <module>
magic_mprun = MemoryProfilerMagics().mprun.func
TypeError: __init__() takes exactly 2 arguments (1 given)
----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /private/tmp/pip-build-BV2ywJ/memory-profiler
When running python -m memory_profiler script_file.py --some-args, memory_profiler assumes that --some-args is intended for it rather than for script_file.py. This is easy to fix by adding the following single line immediately after creating the OptionParser:
parser.disable_interspersed_args()
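The effect of that call can be checked directly with the stdlib optparse module; this sketch mimics the situation from the report (--debug stands in for a profiler option):

```python
from optparse import OptionParser

parser = OptionParser()
parser.add_option('--debug', action='store_true')
parser.disable_interspersed_args()

# With interspersed args disabled, parsing stops at the first positional
# argument, so the profiled script gets its own flags back untouched.
opts, args = parser.parse_args(['script_file.py', '--some-args'])
print(args)
```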
Hi again,
I think that I may have found a possible race condition when counting, with psutil, the memory of a process using the include_children option. The problem (I think) is in this piece of code in _get_memory:
if include_children:
    for p in process.get_children(recursive=True):
        mem += p.get_memory_info()[0] / _TWO_20
The get_children method returns a list that is iterated over to compute the total memory. It may happen, though, that one of the child processes dies or finishes before the sum completes, resulting in an error like this:
Reading configuration from '/pica/h1/guilc/repos/facs/tests/data/bin/fastq_screen.conf'
Using 1 threads for searches
Adding database phiX
Processing /pica/h1/guilc/repos/facs/tests/data/synthetic_fastq/simngs_phiX_100.fastq
Output file /pica/h1/guilc/repos/facs/tests/data/tmp/simngs_phiX_100_screen.txt already exists - skipping
Processing complete
Process MemTimer-2:
Traceback (most recent call last):
File "/sw/comp/python/2.7_kalkyl/lib/python2.7/multiprocessing/process.py", line 232, in _bootstrap
self.run()
File "/pica/h1/guilc/.virtualenvs/facs/lib/python2.7/site-packages/memory_profiler.py", line 124, in run
include_children=self.include_children)
File "/pica/h1/guilc/.virtualenvs/facs/lib/python2.7/site-packages/memory_profiler.py", line 52, in _get_memory
mem += p.get_memory_info()[0] / _TWO_20
File "/pica/h1/guilc/.virtualenvs/facs/lib/python2.7/site-packages/psutil/__init__.py", line 758, in get_memory_info
return self._platform_impl.get_memory_info()
File "/pica/h1/guilc/.virtualenvs/facs/lib/python2.7/site-packages/psutil/_pslinux.py", line 470, in wrapper
raise NoSuchProcess(self.pid, self._process_name)
NoSuchProcess: process no longer exists (pid=17442)
It happens randomly, and can be solved by wrapping the sum in a try/except statement:
if include_children:
    for p in process.get_children(recursive=True):
        try:
            mem += p.get_memory_info()[0] / _TWO_20
        except NoSuchProcess:
            pass
I'm not sure this is the best solution, though... any comments/ideas? @fabianp @brainstorm
Thanks!
Does not correctly pass arguments to the executed script.
I have an issue with making the plot from an mprof run legible. This is how mine looks:
I'd like to be able to stretch/zoom the plot and make it much larger, so the function markers don't overlap.
I have tried changing matplotlib's savefig.dpi and figure.figsize values, and both result in the graph being scaled rather than the canvas being larger, with the lines/function markers becoming thinner and more separated and the text smaller.
I tried a really wide figure using these settings in my matplotlibrc:

figure.figsize : 200, 10  # figure size in inches
savefig.dpi : 100

but it still plotted at 1400x600.
Do you know a way to make this possible?
It should be possible to profile a chunk of code by using the with statement as a delimiter.
sample script:
@profile
def f():
    import numpy as np
    print "about to allocate"
    a = np.ones(1e8)
    print "done"

f()
I ran "mprof run --python" and after that tried to plot the graph ("mprof plot"), but I don't see any graph being plotted.
vikas@host:/home/vikas/memory_profiler-0.32$ ./mprof run --python ../asl
mprof: Sampling memory every 0.1s
running as a Python program...
vikas@host:/home/vikas/memory_profiler-0.32$ cat mprofile_20150224005550.dat
CMDLINE python ../asl
MEM 1.316406 1424768150.5671
MEM 6.539062 1424768150.6675
MEM 8.812500 1424768150.7678
MEM 8.812500 1424768150.8681
MEM 8.812500 1424768150.9684
When invoking via command line with -m option, memory_profiler does not remove itself from sys.argv, so it messes up the profiled program's argument parsing:
$ cat mp.py
import sys
print sys.argv
$ python mp.py --foo
['mp.py', '--foo']
$ python -m memory_profiler mp.py --foo
['/home/jneely/dev/env/betl/lib/python2.6/site-packages/memory_profiler.py', 'mp.py', '--foo']
By contrast:
$ python -m pdb mp.py --foo
> /home/jneely/tmp/mp.py(1)<module>()
-> import sys
(Pdb) c
['mp.py', '--foo']
I really like the output of this memory profiler.
However, I think that people who are interested in memory efficiency may also be interested in execution time. And by requiring the decorator (or some other memory-profiling mechanism), we end up with code that needs modification each time we want to profile both compute and memory. Finally, the memory profiler slows down execution, so before shipping a final product we must remove the memory-profiling mechanism. Updating code just to capture profiling data is cumbersome and definitely not efficient.
Ideally, the memory profiler would require no updates to the code to perform and would function on the code in a similar manner as the cProfile module.
memory_profiler does not seem to correctly attribute the memory allocation to the first line of a function.
Here are two versions of a script (I was trying to illustrate the failings of sys.getsizeof), differing only in an initial print 'hello world' statement:
$ cat ~/tmp/lists.py
import random
import sys

import numpy as np

@profile
def test_random_mem_usage():
    c = [np.zeros(50000) for x in range(1000)]
    print sys.getsizeof(c)
    print sum(map(len, c))
    print sum(map(sys.getsizeof, c))

if __name__ == '__main__':
    test_random_mem_usage()
and
$ cat ~/tmp/lists2.py
import random
import sys

import numpy as np

@profile
def test_random_mem_usage():
    print 'hello world'
    c = [np.zeros(50000) for x in range(1000)]
    print sys.getsizeof(c)
    print sum(map(len, c))
    print sum(map(sys.getsizeof, c))

if __name__ == '__main__':
    test_random_mem_usage()
Compare the outputs:
$ python -m memory_profiler ~/tmp/lists.py
9032
50000000
80000
Line # Mem usage Increment Line Contents
==============================================
5 @profile
6 399.39 MB 0.00 MB def test_random_mem_usage():
7 399.39 MB 0.00 MB c = [np.zeros(50000) for x in range(1000)]
8 399.39 MB 0.00 MB print sys.getsizeof(c)
9 399.40 MB 0.01 MB print sum(map(len, c))
10 399.40 MB 0.00 MB print sum(map(sys.getsizeof, c))
$ python -m memory_profiler ~/tmp/lists2.py
hello world
9032
50000000
80000
Line # Mem usage Increment Line Contents
==============================================
5 @profile
6 16.43 MB 0.00 MB def test_random_mem_usage():
7 16.43 MB 0.00 MB print 'hello world'
8 399.39 MB 382.96 MB c = [np.zeros(50000) for x in range(1000)]
9 399.39 MB 0.00 MB print sys.getsizeof(c)
10 399.40 MB 0.01 MB print sum(map(len, c))
11 399.40 MB 0.00 MB print sum(map(sys.getsizeof, c))
It appears that memory_profiler does not produce any output at all when asked to profile a generator function (I couldn't find this documented anywhere, so I assume it's a bug).
I took the simple example code snippet from https://pypi.python.org/pypi/memory_profiler and saved it as example.py. I then took a copy and modified it as follows and saved it to example2.py:
@profile
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    yield a

if __name__ == '__main__':
    next(my_func())
(i.e. replaced the "return" with a "yield" instead). I got the following results:
$ python3 -m memory_profiler example.py
Filename: example.py
Line # Mem usage Increment Line Contents
================================================
1 @profile
2 8.969 MB 0.000 MB def my_func():
3 16.699 MB 7.730 MB a = [1] * (10 ** 6)
4 169.324 MB 152.625 MB b = [2] * (2 * 10 ** 7)
5 16.738 MB -152.586 MB del b
6 16.738 MB 0.000 MB return a
$ python3 -m memory_profiler example2.py
$
After installing memory_profiler on 64-bit Python 2.7, there is a file 'mprof' in /Scripts, but it is not runnable on Windows.
I ran the following test code with the command:
mprof run -T 0.001 mprof_example.py
import time

def test1():
    n = 10000
    a = [1] * n
    time.sleep(1)
    return a

def test2():
    n = 100000
    b = [1] * n
    time.sleep(1)
    return b

if __name__ == "__main__":
    test1()
    test2()
I got the following output file:
CMDLINE /usr/local/local/python-2.7.5/bin/python2.7 mprof_example.py
MEM 5.476562 1438698997.6944
MEM 7.613281 1438698997.7504
MEM 7.613281 1438698997.8072
MEM 7.613281 1438698997.8678
MEM 7.613281 1438698997.9238
MEM 7.613281 1438698997.9789
MEM 7.613281 1438698998.0327
MEM 7.613281 1438698998.0876
MEM 7.613281 1438698998.1430
MEM 7.613281 1438698998.1976
MEM 7.613281 1438698998.2512
MEM 7.613281 1438698998.3066
MEM 7.613281 1438698998.3623
MEM 7.613281 1438698998.4171
MEM 7.613281 1438698998.4711
MEM 7.613281 1438698998.5262
MEM 7.613281 1438698998.5816
MEM 7.613281 1438698998.6397
MEM 7.613281 1438698998.6970
MEM 8.378906 1438698998.7522
MEM 8.378906 1438698998.8076
MEM 8.378906 1438698998.8626
MEM 8.378906 1438698998.9165
MEM 8.378906 1438698998.9765
MEM 8.378906 1438698999.0322
MEM 8.378906 1438698999.0871
MEM 8.378906 1438698999.1414
MEM 8.378906 1438698999.1967
MEM 8.378906 1438698999.2529
MEM 8.378906 1438698999.3094
MEM 8.378906 1438698999.3658
MEM 8.378906 1438698999.4282
MEM 8.378906 1438698999.4831
MEM 8.378906 1438698999.5372
MEM 8.378906 1438698999.5924
MEM 8.378906 1438698999.6475
MEM 8.378906 1438698999.7022
MEM 0.000000 1438698999.7563
I tried plotting the result and saw the following error message:
mprof plot mprofile_mprofex.dat
/usr/lib/pymodules/python2.7/matplotlib/axes.py:4601: UserWarning: No labeled objects found. Use label='...' kwarg on individual plots.
warnings.warn("No labeled objects found. "
Traceback (most recent call last):
File "/home/jey/memory_profiler-0.33/mprof", line 490, in <module>
actions[get_action()]()
File "/home/jey/memory_profiler-0.33/mprof", line 470, in plot_action
leg.get_frame().set_alpha(0.5)
AttributeError: 'NoneType' object has no attribute 'get_frame'
Here is an example script:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
import math as m

@profile
def f():
    o = m.sqrt(2013)
    return o

print(f())
And here output with Python 2:
~$ python2 -m memory_profiler ./tmpr.py
44.8664685483
Filename: ./tmpr.py
Line # Mem usage Increment Line Contents
================================================
7 @profile
8 9.668 MB 0.000 MB def f():
9 9.676 MB 0.008 MB o = m.sqrt(2013)
10 9.676 MB 0.000 MB return o
And here with Python 3:
~$ python3 -m memory_profiler ./tmpr.py
Traceback (most recent call last):
File "/usr/lib64/python3.2/runpy.py", line 161, in _run_module_as_main
"__main__", fname, loader, pkg_name)
File "/usr/lib64/python3.2/runpy.py", line 74, in _run_code
exec(code, run_globals)
File "/usr/lib64/python3.2/site-packages/memory_profiler.py", line 615, in <module>
ns, copy(globals()))
File "./tmpr.py", line 13, in <module>
print(f())
File "/usr/lib64/python3.2/site-packages/memory_profiler.py", line 576, in wrapper
val = prof(func)(*args, **kwargs)
File "/usr/lib64/python3.2/site-packages/memory_profiler.py", line 229, in f
result = func(*args, **kwds)
File "./tmpr.py", line 9, in f
o = m.sqrt(2013)
NameError: global name 'm' is not defined
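The mechanism can be illustrated in isolation: if a module's code is re-executed into a fresh globals dict that lacks the module-level imports, calls inside functions defined there fail exactly like this. A minimal illustration of the symptom, not the profiler's actual code path:

```python
source = (
    "def f():\n"
    "    return m.sqrt(2013)\n"
)

fresh_globals = {}  # note: no 'import math as m' in here
exec(compile(source, '<demo>', 'exec'), fresh_globals)

# f was defined against fresh_globals, so the lookup of 'm' fails.
try:
    fresh_globals['f']()
except NameError as exc:
    print(exc)
```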
hi fabianp,
I'm reusing some code from memory_profiler in one of my projects (https://github.com/peter1000/SpeedIT; no code uploaded yet).
As I looked through it, I have one question: what is the use of self.last_time in this code part? It doesn't seem to be used anywhere.
https://github.com/fabianp/memory_profiler/blob/master/memory_profiler.py#L527

def disable(self):
    self.last_time = {}
    sys.settrace(self._original_trace_function)

Just a question. Thanks, peter1000
Might be possible with sys.settrace and triggering the call event on that function, and then monkey-patching it just before it is called
Using a unicode string inside a timestamp context manager:

with profile.timestamp(u"Adding_jobs"):

causes:

TypeError: __name__ must be set to a string object

This can be triggered by a normal-looking Python string if you use from __future__ import unicode_literals (as I originally did), and the error message isn't as informative as it could be. The solution, when using the __future__ import, is to force a byte string:

with profile.timestamp(b"Adding_jobs"):

I mention this mainly to help others spot the problem if they hit this error message.