python / pyperformance

Python Performance Benchmark Suite

Home Page: http://pyperformance.readthedocs.io/

License: MIT License


pyperformance's Introduction

The Python Benchmark Suite


The pyperformance project is intended to be an authoritative source of benchmarks for all Python implementations. The focus is on real-world benchmarks, rather than synthetic benchmarks, using whole applications when possible.

pyperformance is not tuned for PyPy yet: use the PyPy benchmarks project instead to measure PyPy performance.

pyperformance is distributed under the MIT license.

pyperformance's Issues

speed.python.org not run since May 2

I don't know if this is an issue here, but it may have something to do with the default branch of CPython's git repo being renamed to "main". There seem to be some remaining references to "master" here.

Multiple pyperformance tests are incompatible with Python 3.9

Changes in Python 3.9 break html5lib, django_template, and tornado_http tests with this error:

File "/usr/local/src/pyperf-tot-no-venv/lib/python3.9/site-packages/html5lib/_tokenizer.py", line 16, in <module> from ._trie import Trie File "/usr/local/src/pyperf-tot-no-venv/lib/python3.9/site-packages/html5lib/_trie/__init__.py", line 3, in <module> from .py import Trie as PyTrie File "/usr/local/src/pyperf-tot-no-venv/lib/python3.9/site-packages/html5lib/_trie/py.py", line 6, in <module> from ._base import Trie as ABCTrie File "/usr/local/src/pyperf-tot-no-venv/lib/python3.9/site-packages/html5lib/_trie/_base.py", line 3, in <module> from collections import Mapping ImportError: cannot import name 'Mapping' from 'collections' (/root/py3-tot-no/lib/python3.9/collections/__init__.py) ERROR: Benchmark html5lib failed: Benchmark died

How to reproduce the issue

I've been building Python 3 from source and encountered it through the standard build process.

  1. Build Python from source. I reproduced it with git commit befa032d8869e0fab4732d910f3887642879d644 from the cpython GitHub repository.

  2. Run pyperformance with one of the following parameter sets:

/py3buildpath/bin/pyperformance run --python=/py3buildpath/bin/python3 --venv venvpath -b html5lib -o output.json

/py3buildpath/bin/pyperformance run --python=/py3buildpath/bin/python3 --venv venvpath -b django_template -o output.json

/py3buildpath/bin/pyperformance run --python=/py3buildpath/bin/python3 --venv venvpath -b tornado_http -o output.json

  3. The error message is shown.

What you expected to happen

I expected these pyperformance benchmarks to complete successfully and produce a result (without error).

What actually happens

Benchmark dies trying to import from collections.

I believe that with Python 3.9 these names need to be imported directly from collections.abc.
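
For illustration, this is the kind of one-line change the affected libraries need (a sketch, not the actual html5lib patch):

    # Deprecated alias, removed from newer Python 3 versions:
    #     from collections import Mapping
    # Required form:
    from collections.abc import Mapping

    print(issubclass(dict, Mapping))  # True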

See this commit: python/cpython@ef092fe

Here's a patch that was backported to the Python 3 vendored library: pypa/pip@ef7ca14#diff-2496ad1eedee846e323ed2916d6c2d24

This library probably needs an official release so that the vendored patch set in Python 3 doesn't become bloated.

genshi benchmark fails on Python 3.8.0a4+

  File "lib/python3.8/site-packages/genshi/template/text.py", line 137, in __init__
    Template.__init__(self, source, filepath=filepath, filename=filename,
  File "lib/python3.8/site-packages/genshi/template/base.py", line 418, in __init__
    self._stream = self._parse(source, encoding)
  File "lib/python3.8/site-packages/genshi/template/text.py", line 181, in _parse
    for kind, data, pos in interpolate(text, self.filepath, lineno,
  File "lib/python3.8/site-packages/genshi/template/interpolation.py", line 77, in interpolate
    expr = Expression(chunk.strip(), pos[0], pos[1],
  File "lib/python3.8/site-packages/genshi/template/eval.py", line 93, in __init__
    self.code = _compile(node, self.source, mode=self.mode,
  File "lib/python3.8/site-packages/genshi/template/eval.py", line 470, in _compile
    return build_code_chunk(code, filename, name, lineno)
  File "lib/python3.8/site-packages/genshi/compat.py", line 94, in build_code_chunk
    return CodeType(0, code.co_nlocals, code.co_kwonlyargcount,
an integer is required (got type bytes)

genshi benchmark fails on Python 3.8

performance 0.7.0 fails on the master branch of Python (future Python 3.8) because genshi uses the _ast.Str type, which is gone in Python 3.8.

visit_Name() at genshi/template/eval.py:616:

strarg = _new(_ast.Str, node.id)
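
For reference (not Genshi's actual fix), code targeting newer Python versions typically replaces ast.Str with ast.Constant:

    import ast

    # Old style, deprecated since Python 3.8 and later removed: ast.Str(s="...")
    # New style: a single ast.Constant node carries any literal value.
    node = ast.Constant(value="some string")
    print(ast.dump(node))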

performance 0.7.0 uses Genshi 0.7.1. I see two options:

  • Disable/remove the benchmark
  • Fix Genshi, wait for a new release, upgrade Genshi in performance

Genshi project homepage: https://genshi.edgewall.org/

cc @serhiy-storchaka @methane

Full traceback:

2018-10-16 15:31:58,023: [10/47] genshi...
2018-10-16 15:31:58,024: INFO:root:Running `/home/haypo/bench_tmpdir/venv/bin/python -u /home/haypo/performance/performance/benchmarks/bm_genshi.py --verbose --output /tmp/tmpace4xllo`
2018-10-16 15:31:58,177: Traceback (most recent call last):
2018-10-16 15:31:58,177:   File "/home/haypo/performance/performance/benchmarks/bm_genshi.py", line 68, in <module>
2018-10-16 15:31:58,178:     runner.bench_time_func(name, bench_genshi, tmpl_cls, tmpl_str)
2018-10-16 15:31:58,178:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_runner.py", line 458, in bench_time_func
2018-10-16 15:31:58,178:     return self._main(task)
2018-10-16 15:31:58,178:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_runner.py", line 423, in _main
2018-10-16 15:31:58,178:     bench = self._worker(task)
2018-10-16 15:31:58,178:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_runner.py", line 397, in _worker
2018-10-16 15:31:58,179:     run = task.create_run()
2018-10-16 15:31:58,179:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_worker.py", line 293, in create_run
2018-10-16 15:31:58,179:     self.compute()
2018-10-16 15:31:58,179:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_worker.py", line 331, in compute
2018-10-16 15:31:58,179:     WorkerTask.compute(self)
2018-10-16 15:31:58,179:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_worker.py", line 280, in compute
2018-10-16 15:31:58,180:     self.calibrate_loops()
2018-10-16 15:31:58,180:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_worker.py", line 243, in calibrate_loops
2018-10-16 15:31:58,180:     self._compute_values(self.warmups, nvalue,
2018-10-16 15:31:58,180:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_worker.py", line 76, in _compute_values
2018-10-16 15:31:58,180:     raw_value = self.task_func(self, self.loops)
2018-10-16 15:31:58,180:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/perf/_runner.py", line 454, in task_func
2018-10-16 15:31:58,181:     return time_func(loops, *args)
2018-10-16 15:31:58,181:   File "/home/haypo/performance/performance/benchmarks/bm_genshi.py", line 29, in bench_genshi
2018-10-16 15:31:58,181:     tmpl = tmpl_cls(tmpl_str)
2018-10-16 15:31:58,181:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/text.py", line 137, in __init__
2018-10-16 15:31:58,181:     Template.__init__(self, source, filepath=filepath, filename=filename,
2018-10-16 15:31:58,181:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/base.py", line 418, in __init__
2018-10-16 15:31:58,181:     self._stream = self._parse(source, encoding)
2018-10-16 15:31:58,182:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/text.py", line 181, in _parse
2018-10-16 15:31:58,182:     for kind, data, pos in interpolate(text, self.filepath, lineno,
2018-10-16 15:31:58,182:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/interpolation.py", line 77, in interpolate
2018-10-16 15:31:58,182:     expr = Expression(chunk.strip(), pos[0], pos[1],
2018-10-16 15:31:58,182:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/eval.py", line 93, in __init__
2018-10-16 15:31:58,182:     self.code = _compile(node, self.source, mode=self.mode,
2018-10-16 15:31:58,183:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/eval.py", line 451, in _compile
2018-10-16 15:31:58,183:     tree = xform().visit(node)
2018-10-16 15:31:58,183:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/astutil.py", line 794, in visit
2018-10-16 15:31:58,183:     return visitor(node)
2018-10-16 15:31:58,183:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/astutil.py", line 816, in _clone
2018-10-16 15:31:58,183:     value = self.visit(value)
2018-10-16 15:31:58,184:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/astutil.py", line 794, in visit
2018-10-16 15:31:58,184:     return visitor(node)
2018-10-16 15:31:58,184:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/genshi/template/eval.py", line 616, in visit_Name
2018-10-16 15:31:58,184:     strarg = _new(_ast.Str, node.id)
2018-10-16 15:31:58,184: AttributeError: module '_ast' has no attribute 'Str'

Semantic Difference Between Python 2 and Python 3

While looking for performance differences between CPython 2.7 and CPython 3.6, I noticed the same benchmark would exercise different parts of the interpreter.

pybench/Calls.py (PythonMethodCalls), pybench/Lookups.py (SpecialClassAttribute, NormalClassAttribute, SpecialInstanceAttribute, and NormalInstanceAttribute):

These benchmarks use the syntax class c: to create classes. This syntax creates old-style classes in Python 2 and new-style classes in Python 3. Should we create separate tests for old-style classes (disabled in Python 3) and new-style classes?
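
For reference, the distinction only exists on Python 2, where a bare class statement creates an old-style class and inheriting from object creates a new-style class; on Python 3 every class is new-style:

    class Old:            # old-style class on Python 2, new-style on Python 3
        pass

    class New(object):    # new-style class on both Python 2 and Python 3
        pass

    # On Python 2: type(Old()) is the generic 'instance' type, not Old,
    # and attribute lookup/method resolution differ between the two forms.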

pybench/Strings.py:

Due to the str changes in Python 3, it is hard to use these benchmarks to compare Python 2 and Python 3. While we could use bytes in Python 3 (similar to how unicode is used in Unicode.py), I'm not sure that is the correct approach; at that point it would be more of an ASCII string benchmark than a str benchmark.

Thanks!

Add a way to plug in custom benchmarks.

Currently you have to modify this repo if you want to run a custom benchmark. It would be nice to have a mechanism by which a custom benchmark could be plugged in externally.

(This isn't a priority.)

Django and Tornado benchmarks broken on Python 3.9

See also issue #74.

2019-12-17 01:04:41,423: [42/47] tornado_http...
2019-12-17 01:04:41,424: INFO:root:Running `/home/haypo/bench_tmpdir/venv/bin/python -u /home/haypo/pyperformance/pyperformance/benchmarks/bm_tornado_http.py --verbose --output /tmp/tmp8_0ujs9g`
2019-12-17 01:04:41,537: Traceback (most recent call last):
2019-12-17 01:04:41,537:   File "/home/haypo/pyperformance/pyperformance/benchmarks/bm_tornado_http.py", line 15, in <module>
2019-12-17 01:04:41,537:     from tornado.httpclient import AsyncHTTPClient
2019-12-17 01:04:41,537:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/tornado/httpclient.py", line 50, in <module>
2019-12-17 01:04:41,537:     from tornado import gen, httputil, stack_context
2019-12-17 01:04:41,537:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/tornado/httputil.py", line 107, in <module>
2019-12-17 01:04:41,537:     class HTTPHeaders(collections.MutableMapping):
2019-12-17 01:04:41,537: AttributeError: module 'collections' has no attribute 'MutableMapping'

and

2019-12-17 00:43:47,774: [ 6/47] django_template...
2019-12-17 00:43:47,775: INFO:root:Running `/home/haypo/bench_tmpdir/venv/bin/python -u /home/haypo/pyperformance/pyperformance/benchmarks/bm_django_template.py --verbose --output /tmp/tmp32v4efa1`
2019-12-17 00:43:47,925: Traceback (most recent call last):
2019-12-17 00:43:47,925:   File "/home/haypo/pyperformance/pyperformance/benchmarks/bm_django_template.py", line 38, in <module>
2019-12-17 00:43:47,925:     django.setup()
2019-12-17 00:43:47,925:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/__init__.py", line 18, in setup
2019-12-17 00:43:47,925:     from django.urls import set_script_prefix
2019-12-17 00:43:47,925:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/urls/__init__.py", line 1, in <module>
2019-12-17 00:43:47,926:     from .base import (
2019-12-17 00:43:47,926:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/urls/base.py", line 11, in <module>
2019-12-17 00:43:47,926:     from .exceptions import NoReverseMatch, Resolver404
2019-12-17 00:43:47,926:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/urls/exceptions.py", line 3, in <module>
2019-12-17 00:43:47,926:     from django.http import Http404
2019-12-17 00:43:47,926:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/http/__init__.py", line 5, in <module>
2019-12-17 00:43:47,926:     from django.http.response import (
2019-12-17 00:43:47,926:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/http/response.py", line 13, in <module>
2019-12-17 00:43:47,926:     from django.core.serializers.json import DjangoJSONEncoder
2019-12-17 00:43:47,926:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/core/serializers/__init__.py", line 23, in <module>
2019-12-17 00:43:47,926:     from django.core.serializers.base import SerializerDoesNotExist
2019-12-17 00:43:47,927:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/core/serializers/base.py", line 4, in <module>
2019-12-17 00:43:47,927:     from django.db import models
2019-12-17 00:43:47,927:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/db/models/__init__.py", line 5, in <module>
2019-12-17 00:43:47,927:     from django.db.models.deletion import (
2019-12-17 00:43:47,927:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/db/models/deletion.py", line 5, in <module>
2019-12-17 00:43:47,927:     from django.db.models import signals, sql
2019-12-17 00:43:47,927:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/db/models/sql/__init__.py", line 2, in <module>
2019-12-17 00:43:47,927:     from django.db.models.sql.query import *  # NOQA
2019-12-17 00:43:47,927:   File "/home/haypo/bench_tmpdir/venv/lib/python3.9/site-packages/django/db/models/sql/query.py", line 11, in <module>
2019-12-17 00:43:47,927:     from collections import Counter, Iterator, Mapping, OrderedDict
2019-12-17 00:43:47,927: ImportError: cannot import name 'Iterator' from 'collections' (/home/haypo/bench_tmpdir/prefix/lib/python3.9/collections/__init__.py)
2019-12-17 00:43:47,941: ERROR: Benchmark django_template failed: Benchmark died

Different number of values for python_startup and python_startup_no_site between CPython 2.7 and PyPy 2.7

The problem

I tried to compare the performance results between different Python versions and implementations. While the comparison between CPython 3.6 and CPython 2.7 works as expected, I get an exception when comparing the results obtained with CPython 2.7.13 and PyPy 2.7.13.

Exact versions:

CPython:

 Python 2.7.13 |Continuum Analytics, Inc.| (default, Dec 20 2016, 23:05:08) 
 [GCC 4.2.1 Compatible Apple LLVM 6.0 (clang-600.0.57)] on darwin

PyPy:

 (Python 2.7.13 (c925e7381036, Jun 05 2017, 20:53:58) 
 [PyPy 5.8.0 with GCC 4.2.1 Compatible  Apple LLVM 5.1 (clang-503.0.40)]

How to reproduce the issue

Run from CPython 2 environment:

 python -m performance run -o py27.json

Run from PyPy environment:

pypy -m performance run -o pypy27.json

Compare:

pyperformance compare -O table py27.json pypy27.json 

What you expected to happen

I expected a table like this:
+-------------------------+-----------+-------------+-----------------+-------------------------+
| Benchmark | py27.json | pypy27.json | Change | Significance |
+=========================+===========+=============+=================+=========================+
| 2to3 | 767 ms | 1.63 sec | 2.13x slower | Significant (t=-142.45) |
+-------------------------+-----------+-------------+-----------------+-------------------------+
| chaos | 215 ms | 5.58 ms | 38.62x faster | Significant (t=204.35) |
+-------------------------+-----------+-------------+-----------------+-------------------------+

What actually happens

I get this exception:

compare.py", line 212, in __init__
    raise RuntimeError("base and changed don't have "
 RuntimeError: base and changed don't have the same number of values

Note: The line number may have changed due to my debug prints.

Cause

The number of values for the benchmarks python_startup and python_startup_no_site differs: 200 for CPython and 60 for PyPy (same numbers for both benchmarks).

My workaround

I just skipped python_startup and python_startup_no_site with:

        if name in ('python_startup', 'python_startup_no_site'):
            continue

in compare.compare_results:

def compare_results(options):
    base_label, changed_label = get_labels(options.baseline_filename,
                                           options.changed_filename)

    base_suite = perf.BenchmarkSuite.load(options.baseline_filename)
    changed_suite = perf.BenchmarkSuite.load(options.changed_filename)

    results = []
    common = set(base_suite.get_benchmark_names()) & set(
        changed_suite.get_benchmark_names())
    for name in sorted(common):
        print(name)
        if name in ('python_startup', 'python_startup_no_site'):
            continue
        base_bench = base_suite.get_benchmark(name)
        changed_bench = changed_suite.get_benchmark(name)
        result = BenchmarkResult(base_bench, changed_bench)
        results.append(result)

Suggested better solution

Either:

  1. Allow command line argument to explicitly skip comparison of tests.
  2. Skip non-comparable tests automatically and just list them at the end. Make this optional via a command line switch.
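
A minimal sketch of option 2, assuming the same BenchmarkSuite API used in the workaround above (get_benchmark_names(), get_benchmark(), get_values()):

    def compare_common_benchmarks(base_suite, changed_suite):
        """Yield (name, base, changed) for comparable benchmarks and
        report the ones skipped because their value counts differ."""
        skipped = []
        common = set(base_suite.get_benchmark_names()) & set(
            changed_suite.get_benchmark_names())
        for name in sorted(common):
            base_bench = base_suite.get_benchmark(name)
            changed_bench = changed_suite.get_benchmark(name)
            if len(base_bench.get_values()) != len(changed_bench.get_values()):
                skipped.append(name)
                continue
            yield name, base_bench, changed_bench
        if skipped:
            print("Skipped (different number of values): %s" % ", ".join(skipped))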

Add a benchmark that heavily uses type annotations

This is a feature request.

Would it make sense to add a benchmark for code that heavily uses annotations, generics, and other typing features? I even have a suggestion for such a benchmark: running mypy on itself (on its own code). The run currently takes around 10-20 sec. (depending on the machine). If this is too long, another option would be to run mypy on one of the typeshed stubs, which would take less than a second.
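
A minimal sketch of what such a benchmark could look like with pyperf; the target file and flags are placeholders, not the actual pyperformance benchmark:

    # bm_mypy_sketch.py - time a full mypy run with pyperf (illustrative only).
    import sys
    import pyperf

    if __name__ == "__main__":
        runner = pyperf.Runner()
        runner.metadata['description'] = "Run mypy on a target source file"
        # Each measurement spawns `python -m mypy --no-incremental <target>`;
        # 'target.py' is a placeholder for the code being type-checked.
        runner.bench_command('mypy_target',
                             [sys.executable, '-m', 'mypy',
                              '--no-incremental', 'target.py'])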

version 0.8.0 - tornado_http with --track-memory does not terminate

I am running pyperformance run -r -m with a self-built CPython 3.7.0 on Ubuntu 18.04.2 LTS. The command stalls at tornado_http.

I believe this was introduced with the last version, as version 0.7.0 works without problem. Without --track-memory the problem does not occur either.

pyperformance doesn't work on the current Python 3.11 dev version: fail to install greenlet

$ ./bin/python3.11 -m pyperformance run -o ~/python/pgo_lto_pyperformance_macros.json -v 
(...)
    creating build/temp.linux-x86_64-3.11/src/greenlet
    gcc -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -fPIC -I/home/vstinner/python/main/install/venv/cpython3.11-527ffb5582f0/include -I/home/vstinner/python/main/install/include/python3.11 -c src/greenlet/greenlet.c -o build/temp.linux-x86_64-3.11/src/greenlet/greenlet.o
    src/greenlet/greenlet.c: In function ‘g_switchstack’:
    src/greenlet/greenlet.c:508:44: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘recursion_depth’; did you mean ‘recursion_limit’?
      508 |         current->recursion_depth = tstate->recursion_depth;
          |                                            ^~~~~~~~~~~~~~~
          |                                            recursion_limit
    src/greenlet/greenlet.c:509:38: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘frame’; did you mean ‘cframe’?
      509 |         current->top_frame = tstate->frame;
          |                                      ^~~~~
          |                                      cframe
    src/greenlet/greenlet.c:544:17: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘recursion_depth’; did you mean ‘recursion_limit’?
      544 |         tstate->recursion_depth = target->recursion_depth;
          |                 ^~~~~~~~~~~~~~~
          |                 recursion_limit
    src/greenlet/greenlet.c:545:17: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘frame’; did you mean ‘cframe’?
      545 |         tstate->frame = target->top_frame;
          |                 ^~~~~
          |                 cframe
    src/greenlet/greenlet.c: In function ‘g_initialstub’:
    src/greenlet/greenlet.c:821:50: error: ‘PyThreadState’ {aka ‘struct _ts’} has no member named ‘recursion_depth’; did you mean ‘recursion_limit’?
      821 |     self->recursion_depth = PyThreadState_GET()->recursion_depth;
          |                                                  ^~~~~~~~~~~~~~~
          |                                                  recursion_limit
    error: command '/usr/bin/gcc' failed with exit code 1

unpickle_pure_python broken on Python 3.8 beta1

2019-06-06 08:14:37,403: [46/47] unpickle_pure_python...
2019-06-06 08:14:37,404: INFO:root:Running `/home/haypo/bench_tmpdir/venv/bin/python -u /home/haypo/pyperformance/pyperformance/benchmarks/bm_pickle.py --pure-python unpickle --verbose --output /tmp/tmp2jgtqf2c`
2019-06-06 08:14:37,471: Traceback (most recent call last):
2019-06-06 08:14:37,471:   File "/home/haypo/pyperformance/pyperformance/benchmarks/bm_pickle.py", line 287, in <module>
2019-06-06 08:14:37,471:     import pickle
2019-06-06 08:14:37,471:   File "/home/haypo/bench_tmpdir/prefix/lib/python3.9/pickle.py", line 39, in <module>
2019-06-06 08:14:37,471:     from _pickle import PickleBuffer
2019-06-06 08:14:37,471: ModuleNotFoundError: import of _pickle halted; None in sys.modules
2019-06-06 08:14:37,477: ERROR: Benchmark unpickle_pure_python failed: Benchmark died

Upstream issue closed as WONTFIX: https://bugs.python.org/issue37210
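
For context, the failure is consistent with how a pure-Python pickle run is typically forced, sketched below (assumed mechanism, not the literal bm_pickle.py code):

    import sys

    # Block the C accelerator so pickle.py falls back to its pure-Python code.
    sys.modules['_pickle'] = None

    try:
        import pickle  # noqa: F401
    except ImportError as exc:
        # Since Python 3.8, pickle.py does `from _pickle import PickleBuffer`
        # at import time (see the traceback above), so blocking _pickle fails here.
        print("import pickle failed:", exc)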

Issues running on Windows

E.g. on Windows the binary is named python.exe (same on macOS), and it is stored in a Scripts directory instead of bin (#2 tries to solve this). os.execv() also doesn't behave the way you might expect on (at least) Windows 10: it immediately returns to PowerShell/cmd.exe and the process runs in the background of the prompt, appearing to pause on I/O unless the user interacts with the terminal, e.g. by hitting Enter. If you type exit at the prompt, the new process takes over the terminal window, but if you then kill that process you kill the entire terminal window, including the end of the benchmark run, instead of returning.

Typo in assignment (benchmarks/bm_deltablue.py)

I think there is a typo in the assignment of the member Strength.STONG_PREFERRED (which should probably be Strength.STRONG_PREFERRED) in the deltablue benchmark script.

diff --git a/performance/benchmarks/bm_deltablue.py b/performance/benchmarks/bm_deltablue.py
index 798ef00..197beb4 100644
--- a/performance/benchmarks/bm_deltablue.py
+++ b/performance/benchmarks/bm_deltablue.py
@@ -81,7 +81,7 @@ class Strength(object):

 # This is a terrible pattern IMO, but true to the original JS implementation.
 Strength.REQUIRED = Strength(0, "required")
-Strength.STONG_PREFERRED = Strength(1, "strongPreferred")
+Strength.STRONG_PREFERRED = Strength(1, "strongPreferred")
 Strength.PREFERRED = Strength(2, "preferred")
 Strength.STRONG_DEFAULT = Strength(3, "strongDefault")
 Strength.NORMAL = Strength(4, "normal")

I don't think this is a problem as that symbol is not used anywhere, but I think it would be nice to fix.
Thanks!

A more pythonic Richards benchmark?

The Richards benchmark included with pyperformance seems to be a C -> C++ -> Java -> Python port. It makes heavy use of object-oriented programming and doesn't look anything like the original C code.

I've been playing with the Python 2 version distributed from the original website and ported it to Python 3. It looks more like the C version, which has some good things and a few bad things:

+ Covers non-OO programming use case
- Overloads variables as in Union[int, Type1, Type2]

I'm in the process of improving it to use Optional[Type1] and Optional[Type2].

My goal is to explore a dialect of statically typed python using modern pythonic constructs:

  • dataclasses
  • enums
  • python3 type annotations
  • static transpilation via py2many

The code is here: https://github.com/adsharma/richards-benchmark. Let me know if this sounds interesting and if you'd like to update the variant in the repo at some point in the future. I hear some of the faster cpython work is using these benchmarks.

Upgrade Tornado to version 5

I tried to upgrade the tornado dependency from version 4.5.3 to 5.1.1, but it broke the benchmark:

  File "performance/benchmarks/bm_tornado_http.py", line 61, in bench_tornado
    sock = make_http_server(loop, make_application())
  File "performance/benchmarks/bm_tornado_http.py", line 51, in make_http_server
    server = HTTPServer(request_handler, io_loop=loop)
  File "/home/vstinner/prog/python/performance/.tox/py3/lib/python3.6/site-packages/tornado/util.py", line 312, in __new__
    instance.initialize(*args, **init_kwargs)
TypeError: initialize() got an unexpected keyword argument 'io_loop'
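
For reference, Tornado 5 removed the io_loop keyword arguments entirely; a rough sketch of the Tornado 5 style server setup (names based on the traceback above, not the final benchmark fix):

    from tornado.httpserver import HTTPServer
    from tornado.netutil import bind_sockets

    def make_http_server(request_handler):
        # Tornado 5+: HTTPServer picks up the current IOLoop implicitly,
        # so the io_loop=... argument no longer exists.
        server = HTTPServer(request_handler)
        sockets = bind_sockets(0, '127.0.0.1')  # port 0 picks a free port
        server.add_sockets(sockets)
        return sockets[0]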

mako fails with 3.8.0b2 on Windows

INFO:root:Running `C:\Python38\venv\cpython3.8-2459c2ec2c3d\Scripts\python.exe -u C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperformance\benchmarks\bm_mako.py --output C:\Users\xy\AppData\Local\Temp\tmpdezyn81_`
Traceback (most recent call last):
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperformance\benchmarks\bm_mako.py", line 20, in <module>
    from mako.template import Template   # noqa
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\mako\template.py", line 10, in <module>
    from mako.lexer import Lexer
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\mako\lexer.py", line 11, in <module>
    from mako import parsetree, exceptions, compat
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\mako\parsetree.py", line 9, in <module>
    from mako import exceptions, ast, util, filters, compat
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\mako\exceptions.py", line 11, in <module>
    from mako import util, compat
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\mako\util.py", line 11, in <module>
    from mako import compat
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\mako\compat.py", line 123, in <module>
    time_func = time.clock
AttributeError: module 'time' has no attribute 'clock'
ERROR: Benchmark mako failed: Benchmark died
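
The direct cause is that time.clock() was removed in Python 3.8. A small compatibility sketch (not mako's actual fix) would be:

    import time

    # time.perf_counter() exists on Python 3.3+; only fall back to the
    # removed time.clock() on very old interpreters.
    time_func = getattr(time, "perf_counter", None) or time.clock
    print(time_func())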

Executing bench.py w/o arguments raises an exception

If you run python bench.py you trigger an exception:

Traceback (most recent call last):
  File ".\bench.py", line 10, in <module>
    benchmark.cli.main()
  File "C:\Users\brcan\Documents\Repositories\benchmarks\benchmark\c
    if not options.inside_venv:
AttributeError: 'Namespace' object has no attribute 'inside_venv'

Potential TypeError in bm_chaos.py:68

In bm_chaos.py, in the definition of function GetKnots, I think that adding a list with a range object raises a TypeError in Python 3.x:

def GetKnots(points, degree):
    knots = [0] * degree + range(1, len(points) - degree)
    knots += [len(points) - degree] * degree
    return knots

This function isn't called in the actual benchmark due to a test in Spline's __init__. Could we change the first line of GetKnots to:

    knots = [0] * degree + list(range(1, len(points) - degree))

parse_cpu_list() causes the benchmark to fail

When using performance in my environment, it fails immediately due to an error:

Traceback (most recent call last):
  File "/tmp/venv/lib/python2.7/site-packages/performance/benchmarks/bm_2to3.py", line 30, in <module>
    runner.bench_func('2to3', bench_2to3, command, devnull_out)
  File "/tmp/venv/lib/python2.7/site-packages/perf/_runner.py", line 485, in bench_func
    return self._main(task)
  File "/tmp/venv/lib/python2.7/site-packages/perf/_runner.py", line 415, in _main
    bench = self._worker(task)
  File "/tmp/venv/lib/python2.7/site-packages/perf/_runner.py", line 389, in _worker
    run = task.create_run()
  File "/tmp/venv/lib/python2.7/site-packages/perf/_worker.py", line 293, in create_run
    self.compute()
  File "/tmp/venv/lib/python2.7/site-packages/perf/_worker.py", line 318, in compute
    WorkerTask.compute(self)
  File "/tmp/venv/lib/python2.7/site-packages/perf/_worker.py", line 285, in compute
    metadata2 = self.collect_metadata()
  File "/tmp/venv/lib/python2.7/site-packages/perf/_worker.py", line 350, in collect_metadata
    return collect_metadata()
  File "/tmp/venv/lib/python2.7/site-packages/perf/_collect_metadata.py", line 416, in collect_metadata
    collect_cpu_metadata(metadata)
  File "/tmp/venv/lib/python2.7/site-packages/perf/_collect_metadata.py", line 405, in collect_cpu_metadata
    collect_cpu_config(metadata, all_cpus)
  File "/tmp/venv/lib/python2.7/site-packages/perf/_collect_metadata.py", line 291, in collect_cpu_config
    nohz_full = parse_cpu_list(nohz_full)
  File "/tmp/venv/lib/python2.7/site-packages/perf/_cpu_utils.py", line 97, in parse_cpu_list
    cpus.append(int(part))
ValueError: invalid literal for int() with base 10: ''

Tracing the problem, I found that the content of /sys/devices/system/cpu/nohz_full is "\x00\n", so int("\x00") fails.
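
A defensive parser along these lines would tolerate the stray NUL byte; this is an illustrative sketch, not the actual perf._cpu_utils code:

    def parse_cpu_list(cpu_list):
        """Parse strings like '0-3,5' into [0, 1, 2, 3, 5], ignoring junk."""
        cpus = []
        for part in cpu_list.strip(' \x00\n').split(','):
            part = part.strip()
            if not part:
                continue  # skip empty parts such as a bare NUL byte
            if '-' in part:
                first, last = part.split('-', 1)
                cpus.extend(range(int(first), int(last) + 1))
            else:
                cpus.append(int(part))
        return cpus

    print(parse_cpu_list("\x00\n"))  # []
    print(parse_cpu_list("0-2,4"))   # [0, 1, 2, 4]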

Running with "run" argument considered ambigious

Not sure if this belongs here but right now running bench.py with run brings up this error. It seems like there's an active issue for this: https://bugs.python.org/issue14365

> ..\python_d.exe bench.py run
usage: bench.py [-h] {run,compare,run_compare,list,list_groups} ...
bench.py: error: argument action: ambiguous choice: 'run' could match run, run_compare

As a workaround I changed my run to run_local in https://github.com/python/benchmarks/blob/master/benchmark/cli.py#L84
and
https://github.com/python/benchmarks/blob/master/benchmark/cli.py#L150
and
https://github.com/python/benchmarks/blob/master/benchmark/run.py#L445

Set up Travis CI

With caching turned on in the .travis.yml file, the base set of benchmarks should complete for a single run before Travis times out and kills the VM. That would be enough to make sure simple things like the django -> django_template rename breakage don't occur again.

run --python doesn't seem to work

I tried running the benchmark with python3 -m performance run --python ../cpython/3.6/python.exe --rigorous -b all -o ../3.6-perf.json, but 3.6-perf.json says "python_version":"3.5.2 (64-bit)", while running ../cpython/3.6/python.exe directly says it's 3.6.0b1+.

'venv create' is failing with pip 10

pyperformance venv create has started failing for me recently, with the following message:

Execute: venv/cpython3.6-a7d80a339c76/bin/python -m pip install
ERROR: You must give at least one requirement to install (see "pip help install")
Command venv/cpython3.6-a7d80a339c76/bin/python -m pip install failed with exit code 1

I think this may be due to a behaviour change in pip 10.

I'm using performance v0.6.1.

Full output below.

Creating the virtual environment venv/cpython3.6-a7d80a339c76
Execute: /home/mjw/opt/python3.6/bin/python3.6 -m venv --without-pip venv/cpython3.6-a7d80a339c76
Execute: venv/cpython3.6-a7d80a339c76/bin/python -c 'import sys; print(sys.hexversion)'
Python hexversion: 30602f0
Execute: venv/cpython3.6-a7d80a339c76/bin/python -m ensurepip --verbose
Ignoring indexes: https://pypi.python.org/simple
Collecting setuptools
  0 location(s) to search for versions of setuptools:
  Skipping link /tmp/tmpqak284a0 (from -f); not a file
  Skipping link file:///tmp/tmpqak284a0/pip-9.0.1-py2.py3-none-any.whl; wrong project name (not setuptools)
  Found link file:///tmp/tmpqak284a0/setuptools-28.8.0-py2.py3-none-any.whl, version: 28.8.0
  Local files found: /tmp/tmpqak284a0/setuptools-28.8.0-py2.py3-none-any.whl
  Using version 28.8.0 (newest of versions: 28.8.0)
Collecting pip
  0 location(s) to search for versions of pip:
  Found link file:///tmp/tmpqak284a0/pip-9.0.1-py2.py3-none-any.whl, version: 9.0.1
  Skipping link file:///tmp/tmpqak284a0/setuptools-28.8.0-py2.py3-none-any.whl; wrong project name (not pip)
  Local files found: /tmp/tmpqak284a0/pip-9.0.1-py2.py3-none-any.whl
  Using version 9.0.1 (newest of versions: 9.0.1)
Installing collected packages: setuptools, pip

  changing mode of /home/mjw/pyperf/venv/cpython3.6-a7d80a339c76/bin/easy_install-3.6 to 775

  changing mode of /home/mjw/pyperf/venv/cpython3.6-a7d80a339c76/bin/pip3 to 775
  changing mode of /home/mjw/pyperf/venv/cpython3.6-a7d80a339c76/bin/pip3.6 to 775
Successfully installed pip-9.0.1 setuptools-28.8.0
Cleaning up...
Execute: venv/cpython3.6-a7d80a339c76/bin/python -m pip --version
pip 9.0.1 from /home/mjw/pyperf/venv/cpython3.6-a7d80a339c76/lib/python3.6/site-packages (python 3.6)
Execute: venv/cpython3.6-a7d80a339c76/bin/python -m pip install -U 'setuptools>=18.5' 'pip>=6.0'
Collecting setuptools>=18.5
  Using cached https://files.pythonhosted.org/packages/20/d7/04a0b689d3035143e2ff288f4b9ee4bf6ed80585cc121c90bfd85a1a8c2e/setuptools-39.0.1-py2.py3-none-any.whl
Collecting pip>=6.0
  Using cached https://files.pythonhosted.org/packages/62/a1/0d452b6901b0157a0134fd27ba89bf95a857fbda64ba52e1ca2cf61d8412/pip-10.0.0-py2.py3-none-any.whl
Installing collected packages: setuptools, pip
  Found existing installation: setuptools 28.8.0
    Uninstalling setuptools-28.8.0:
      Successfully uninstalled setuptools-28.8.0
  Found existing installation: pip 9.0.1
    Uninstalling pip-9.0.1:
      Successfully uninstalled pip-9.0.1
Successfully installed pip-10.0.0 setuptools-39.0.1

Execute: venv/cpython3.6-a7d80a339c76/bin/python -m pip install -U wheel
Collecting wheel
  Using cached https://files.pythonhosted.org/packages/1b/d2/22cde5ea9af055f81814f9f2545f5ed8a053eb749c08d186b369959189a8/wheel-0.31.0-py2.py3-none-any.whl
Installing collected packages: wheel
Successfully installed wheel-0.31.0

Execute: venv/cpython3.6-a7d80a339c76/bin/python -m pip install
ERROR: You must give at least one requirement to install (see "pip help install")
Command venv/cpython3.6-a7d80a339c76/bin/python -m pip install failed with exit code 1

Remove directory venv/cpython3.6-a7d80a339c76

Mac OS darwin doesn't use .exe once make install is run

During the compile_all stage, the script tries to print the version on the screen.

If install=True in the configuration, make install is run. Darwin does compile the CPython binary as python.exe, but once you run make install it is installed under the right name.

I can see the fault in the logic here.
https://github.com/python/performance/blob/f9ebee09652f77ca01736f9b034a0bb0d8826a45/performance/compile.py#L301
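
A rough sketch of the kind of fallback compile.py could use instead of hard-coding the .exe suffix (paths and names here are illustrative, not the actual compile.py API):

    import os

    def find_installed_python(prefix_bin):
        """Return the Python executable to run after `make install`.

        On macOS the in-tree build produces python.exe, but `make install`
        installs it under the normal name, so prefer whichever exists."""
        for name in ("python3", "python", "python3.exe", "python.exe"):
            candidate = os.path.join(prefix_bin, name)
            if os.path.exists(candidate):
                return candidate
        raise FileNotFoundError("no python executable found in %s" % prefix_bin)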

2018-03-31 20:20:25,352: Installed Python version:
2018-03-31 20:20:25,353: + /Users/anthonyshaw/repo/python_comparison/bench_tmpdir/prefix/bin/python3.exe --version
Traceback (most recent call last):
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/__main__.py", line 2, in <module>
    performance.cli.main()
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/cli.py", line 226, in main
    _main()
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/cli.py", line 190, in _main
    cmd_compile(options)
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 921, in cmd_compile
    bench.main()
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 656, in main
    failed = self.compile_bench()
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 641, in compile_bench
    self.compile_install()
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 459, in compile_install
    self.python.compile_install()
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 372, in compile_install
    self.get_version()
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 312, in get_version
    self.run(self.program, '--version')
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 63, in run
    self.app.run(*cmd, cwd=self.cwd, **kw)
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 195, in run
    exitcode = self.run_nocheck(*cmd, **kw)
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 174, in run_nocheck
    proc = self.create_subprocess(cmd, **kwargs)
  File "/Users/anthonyshaw/repo/python_comparison/env/lib/python3.6/site-packages/performance/compile.py", line 158, in create_subprocess
    return subprocess.Popen(cmd, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/subprocess.py", line 707, in __init__
    restore_signals, start_new_session)
  File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/subprocess.py", line 1326, in _execute_child
    raise child_exception_type(errno_num, err_msg)
FileNotFoundError: [Errno 2] No such file or directory: '/Users/anthonyshaw/repo/python_comparison/bench_tmpdir/prefix/bin/python3.exe'
2018-03-31 20:20:25,405: Command /Users/anthonyshaw/repo/python_comparison/env/bin/python3.6 -m performance compile /Users/anthonyshaw/repo/python_comparison/testing.conf 3.6 3.6 --no-update --no-tune failed with exit code 1

chameleon benchmark fails on the master branch of Python

performance 0.7.0 fails on the master branch of Python (future Python 3.8), because Chameleon has no handler for AST node type ast.Constant.

performance 0.7.0 uses Chameleon 3.4. I see two options:

  • Disable/remove the benchmark
  • Fix Chameleon, wait for a new release, upgrade Chameleon in performance

Chameleon project homepage: https://github.com/malthe/chameleon/

cc @serhiy-storchaka @methane

Full traceback:

2018-10-16 15:28:43,656: [ 2/47] chameleon...
2018-10-16 15:28:43,656: INFO:root:Running `/home/haypo/bench_tmpdir/venv/bin/python -u /home/haypo/performance/performance/benchmarks/bm_chameleon.py --verbose --output /tmp/tmp85spznrk`
2018-10-16 15:28:43,842: Traceback (most recent call last):
2018-10-16 15:28:43,842:   File "/home/haypo/performance/performance/benchmarks/bm_chameleon.py", line 36, in <module>
2018-10-16 15:28:43,843:     main()
2018-10-16 15:28:43,843:   File "/home/haypo/performance/performance/benchmarks/bm_chameleon.py", line 26, in main
2018-10-16 15:28:43,843:     tmpl = PageTemplate(BIGTABLE_ZPT)
2018-10-16 15:28:43,843:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/zpt/template.py", line 192, in __init__
2018-10-16 15:28:43,843:     super(PageTemplate, self).__init__(body, **config)
2018-10-16 15:28:43,843:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/template.py", line 128, in __init__
2018-10-16 15:28:43,843:     self.write(body)
2018-10-16 15:28:43,843:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/template.py", line 221, in write
2018-10-16 15:28:43,843:     self.cook(body)
2018-10-16 15:28:43,843:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/template.py", line 158, in cook
2018-10-16 15:28:43,843:     program = self._cook(body, digest, names)
2018-10-16 15:28:43,843:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/template.py", line 231, in _cook
2018-10-16 15:28:43,844:     source = self._compile(body, builtins)
2018-10-16 15:28:43,844:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/template.py", line 265, in _compile
2018-10-16 15:28:43,844:     compiler = Compiler(
2018-10-16 15:28:43,844:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/compiler.py", line 949, in __init__
2018-10-16 15:28:43,844:     generator = TemplateCodeGenerator(module, source)
2018-10-16 15:28:43,844:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/codegen.py", line 116, in __init__
2018-10-16 15:28:43,844:     super(TemplateCodeGenerator, self).__init__(tree)
2018-10-16 15:28:43,844:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/astutil.py", line 226, in __init__
2018-10-16 15:28:43,844:     self.visit(tree)
2018-10-16 15:28:43,844:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/codegen.py", line 201, in visit
2018-10-16 15:28:43,844:     super(TemplateCodeGenerator, self).visit(node)
2018-10-16 15:28:43,845:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/astutil.py", line 286, in visit
2018-10-16 15:28:43,845:     ret = visitor(node)
2018-10-16 15:28:43,845:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/codegen.py", line 119, in visit_Module
2018-10-16 15:28:43,845:     super(TemplateCodeGenerator, self).visit_Module(node)
2018-10-16 15:28:43,845:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/astutil.py", line 293, in visit_Module
2018-10-16 15:28:43,845:     self.visit(n)
2018-10-16 15:28:43,845:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/codegen.py", line 201, in visit
2018-10-16 15:28:43,845:     super(TemplateCodeGenerator, self).visit(node)
2018-10-16 15:28:43,845:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/astutil.py", line 286, in visit
2018-10-16 15:28:43,845:     ret = visitor(node)
2018-10-16 15:28:43,845:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/astutil.py", line 387, in visit_Assign
2018-10-16 15:28:43,846:     self.visit(node.value)
2018-10-16 15:28:43,846:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/codegen.py", line 201, in visit
2018-10-16 15:28:43,846:     super(TemplateCodeGenerator, self).visit(node)
2018-10-16 15:28:43,846:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/astutil.py", line 286, in visit
2018-10-16 15:28:43,846:     ret = visitor(node)
2018-10-16 15:28:43,846:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/astutil.py", line 829, in visit_Call
2018-10-16 15:28:43,846:     self.visit(arg)
2018-10-16 15:28:43,846:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/codegen.py", line 201, in visit
2018-10-16 15:28:43,846:     super(TemplateCodeGenerator, self).visit(node)
2018-10-16 15:28:43,846:   File "/home/haypo/bench_tmpdir/venv/lib/python3.8/site-packages/chameleon/astutil.py", line 284, in visit
2018-10-16 15:28:43,846:     raise Exception('No handler for ``%s`` (%s).' % (
2018-10-16 15:28:43,846: Exception: No handler for ``Constant`` (<_ast.Constant object at 0x7f42c52f8b80>).
2018-10-16 15:28:43,859: ERROR: Benchmark chameleon failed: Benchmark died

Improving representative benchmarks for typing ecosystem

Due to a current lack of representative macrobenchmarks, it is very difficult to decide whether complex accelerators for some parts of typing are worth implementing in the future. Hence, I'm trying to upstream some benchmarks into pyperformance.

IMO, there are three main areas:

  1. Performance of static type checkers implemented in Python (e.g. mypy). (Fixed by #102)
  2. Performance of programs using types at runtime (e.g. pydantic, attrs, etc.).
  3. Runtime overhead of typed code vs fully untyped code.

For case 2, I plan to use one of pydantic's benchmarks here https://github.com/samuelcolvin/pydantic/tree/master/benchmarks, installed without compiled binaries.

Case 3 is very tricky because there are so many ways to use typing. I don't know how often people use certain features, whether they type-hint inside tight loops, etc. So I'm struggling to find a good benchmark. An idea: grabbing one of the existing pyperformance benchmarks, fully type-hinting it, then comparing the performance delta may work.
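
For concreteness, a toy illustration of that idea: the same benchmark body untyped and fully annotated, timed with pyperf (the function is a placeholder, not an actual pyperformance benchmark):

    from typing import List
    import pyperf

    def mean_untyped(values):
        total = 0.0
        for v in values:
            total += v
        return total / len(values)

    def mean_typed(values: List[float]) -> float:
        total: float = 0.0
        for v in values:
            total += v
        return total / len(values)

    if __name__ == "__main__":
        data = [float(i) for i in range(1000)]
        runner = pyperf.Runner()
        runner.bench_func('mean_untyped', mean_untyped, data)
        runner.bench_func('mean_typed', mean_typed, data)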

CC @JelleZijlstra, I would greatly appreciate hearing your opinion on this (especially for case 3). Maybe I can post this on typing-sig too if I need more help.

Afterword:
All 3 cases benefit from general CPython optimizations, but usually only case 3 benefits greatly from typing-module-only optimizations (with case 1 maybe not improving much, if at all, depending on the implementation).

[Docs] Example for compile_all_revisions in config file

Hi all,

Recently I've been trying to benchmark a commit in cpython master branch against another commit in the same branch. The docs provide an example doc/benchmark.conf.sample file and describe it as

[compile_all_revisions]
list of 'sha1=' (default branch: 'master') or 'sha1=branch'

I spent a few hours digging through the compile.py code to figure out what I was doing wrong before I finally realized I misinterpreted the docs. So I'd like to submit a PR to update the docs to make it slightly less confusing.

Thanks for your time!

sqlalchemy fails with 3.8.0b2

INFO:root:Running `C:\Python38\venv\cpython3.8-2459c2ec2c3d\Scripts\python.exe -u C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperformance\benchmarks\bm_sqlalchemy_imperative.py --output C:\Users\xy\AppData\Local\Temp\tmpidjg6d0q`
Traceback (most recent call last):
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperformance\benchmarks\bm_sqlalchemy_imperative.py", line 4, in <module>
    from sqlalchemy import Column, ForeignKey, Integer, String, Table, MetaData
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\sqlalchemy\__init__.py", line 8, in <module>
    from . import util as _util  # noqa
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\sqlalchemy\util\__init__.py", line 14, in <module>
    from ._collections import coerce_generator_arg  # noqa
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\sqlalchemy\util\_collections.py", line 16, in <module>
    from .compat import binary_types
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\sqlalchemy\util\compat.py", line 331, in <module>
    time_func = time.clock
AttributeError: module 'time' has no attribute 'clock'
ERROR: Benchmark sqlalchemy_imperative failed: Benchmark died

--track-memory does not work

pyperformance run -m simply fails:

[ 1/50] 2to3...
INFO:root:Running /tmp/venv/bin/python -u /usr/lib/python2.8/site-packages/performance-0.6.1-py2.8.egg/performance/benchmarks/bm_2to3.py --track-memory --output /tmp/tmpmT5Q1F
.ERROR: --worker requires --loops=N or --calibrate-loops
Traceback (most recent call last):
  File "/usr/lib/python2.8/site-packages/performance-0.6.1-py2.8.egg/performance/benchmarks/bm_2to3.py", line 30, in <module>
    runner.bench_func('2to3', bench_2to3, command, devnull_out)
  File "/usr/lib/python2.8/site-packages/perf-1.4-py2.8.egg/perf/_runner.py", line 485, in bench_func
    return self._main(task)
  File "/usr/lib/python2.8/site-packages/perf-1.4-py2.8.egg/perf/_runner.py", line 420, in _main
    bench = self._master()
  File "/usr/lib/python2.8/site-packages/perf-1.4-py2.8.egg/perf/_runner.py", line 543, in _master
    bench = Master(self).create_bench()
  File "/usr/lib/python2.8/site-packages/perf-1.4-py2.8.egg/perf/_master.py", line 221, in create_bench
    worker_bench, run = self.create_worker_bench()
  File "/usr/lib/python2.8/site-packages/perf-1.4-py2.8.egg/perf/_master.py", line 120, in create_worker_bench
    suite = self.create_suite()
  File "/usr/lib/python2.8/site-packages/perf-1.4-py2.8.egg/perf/_master.py", line 114, in create_suite
    suite = self.spawn_worker(0, 0)
  File "/usr/lib/python2.8/site-packages/perf-1.4-py2.8.egg/perf/_master.py", line 97, in spawn_worker
    % (cmd[0], exitcode))
RuntimeError: /tmp/venv/bin/python failed with exit code 1
ERROR: Benchmark 2to3 failed: Benchmark died
Traceback (most recent call last):
  File "/usr/lib/python2.8/site-packages/performance-0.6.1-py2.8.egg/performance/run.py", line 132, in run_benchmarks
    bench = func(cmd_prefix, options)
  File "/usr/lib/python2.8/site-packages/performance-0.6.1-py2.8.egg/performance/benchmarks/__init__.py", line 88, in BM_2to3
    return run_perf_script(python, options, "2to3")
  File "/usr/lib/python2.8/site-packages/performance-0.6.1-py2.8.egg/performance/run.py", line 98, in run_perf_script
    run_command(cmd, hide_stderr=not options.verbose)
  File "/usr/lib/python2.8/site-packages/performance-0.6.1-py2.8.egg/performance/run.py", line 66, in run_command
    raise RuntimeError("Benchmark died")
RuntimeError: Benchmark died

pyperformance version is 0.6.1, perf version is 1.4, kernel version is 3.10.0.

"upload" only works with "compiled" benchmarks

My workflow for PyPy is

  1. download latest version compiled on buildbot,
  2. run benchmarks,
  3. upload data (using the command line option upload, which goes through compile.py, but should it?).

The cmd_upload function in compile.py assumes BenchmarkRevision.update_metadata from compile.py has been run to add commit information (commit_id, commit_branch, commit_date) and possibly patch_file to the json benchmark info. But because I am not running compile this info is lacking. Possible solutions:

  • add a download_prebuilt option to compile.py and the Python class to support this workflow
  • refactor the update_metadata method from BenchmarkRevision to become part of every benchmark run

Any thoughts? Should there be an upload.py separate from compile.py?

pyperformance run --python= argument doesn't work on Windows

Steps to reproduce:

> pyperformance run --python="C:\User\GitHub\cpython\PCbuild\amd64\python.exe"
Creating the virtual environment venv\cpython3.10-3bfab7affa2d
...
... <installing packages>
...
INFO:root:Running `C:\User\GitHub\cpython\PCbuild\amd64\python.exe -u D:\User\Downloads\python\venv\cpython3.10-3bfab7affa2d\Lib\site-packages\pyperformance\benchmarks\bm_xml_etree.py --output C:\Users\User\AppData\Local\Temp\tmpo_57rqul`
Traceback (most recent call last):
  File "D:\User\Downloads\python\venv\cpython3.10-3bfab7affa2d\Lib\site-packages\pyperformance\benchmarks\bm_xml_etree.py", line 17, in <module>
    import pyperf
ModuleNotFoundError: No module named 'pyperf'
ERROR: Benchmark xml_etree failed: Benchmark died
Traceback ...
RuntimeError: Benchmark died

For some reason, pyperformance runs the benchmark by calling C:\User\GitHub\cpython\PCbuild\amd64\python.exe. I think it should be calling the venv's python (venv\cpython3.10-3bfab7affa2d\Scripts\python.exe) instead.

pyperformance issue when run in a venv created by virtualenv command

hg_startup failed on the py3 job of Travis CI. I had to disable this benchmark again on Python 3. It's an issue related to virtual environments which is not specific to Mercurial.

Travis CI runs Python in a virtual environment created by the "virtualenv" command which overrides the site.py module.

runtests.py then creates a second virtual environment using "python3 -m venv" but this second venv inherits modules of the first venv, whereas pyperformance requires a virtual environment isolated from the system to get reproducible results.

Example without pyperformance:

# create venv1 and venv2
vstinner@apu$ virtualenv -p python3 venv1
vstinner@apu$ venv1/bin/python -m venv venv2

# install mercurial in venv1
vstinner@apu$ venv1/bin/python -m pip install mercurial # in venv1

# ... it's available in venv2!!!
vstinner@apu$ venv2/bin/python -c 'import mercurial; print(mercurial.__file__)' # in venv2
/home/vstinner/venv1/lib/python3.7/site-packages/mercurial/__init__.py

# venv1 uses /home/vstinner/venv1/lib/python3.7/site-packages path
vstinner@apu$ venv1/bin/python -m site
sys.path = [
    '/home/vstinner',
    '/usr/share/qa-tools/python-modules',
    '/home/vstinner/venv1/lib64/python37.zip',
    '/home/vstinner/venv1/lib64/python3.7',
    '/home/vstinner/venv1/lib64/python3.7/lib-dynload',
    '/usr/lib64/python3.7',
    '/usr/lib/python3.7',
    '/home/vstinner/venv1/lib/python3.7/site-packages',
]
USER_BASE: '/home/vstinner/.local' (exists)
USER_SITE: '/home/vstinner/.local/lib/python3.7/site-packages' (exists)
ENABLE_USER_SITE: False

# venv2 uses /home/vstinner/venv1/lib/python3.7/site-packages path as well!!!
vstinner@apu$ venv2/bin/python -m site
sys.path = [
    '/home/vstinner',
    '/usr/share/qa-tools/python-modules',
    '/home/vstinner/venv1/lib64/python37.zip',
    '/home/vstinner/venv1/lib64/python3.7',
    '/home/vstinner/venv1/lib64/python3.7/lib-dynload',
    '/usr/lib64/python3.7',
    '/usr/lib/python3.7',
    '/home/vstinner/venv1/lib/python3.7/site-packages',
]
USER_BASE: '/home/vstinner/.local' (exists)
USER_SITE: '/home/vstinner/.local/lib/python3.7/site-packages' (exists)
ENABLE_USER_SITE: False

# mercurial is installed in venv1...
vstinner@apu$ venv1/bin/python -m pip list
Package    Version
---------- -------
mercurial  5.0.2  
pip        19.2.1 
setuptools 41.0.1 
wheel      0.33.4 

# ... but it's also "installed" in venv2!!!
vstinner@apu$ venv2/bin/python -m pip list
Package    Version
---------- -------
mercurial  5.0.2  
pip        19.2.1 
setuptools 41.0.1 
wheel      0.33.4 

# it's even considered as a "local" install in venv2
vstinner@apu$ venv2/bin/python -m pip list --local
Package    Version
---------- -------
mercurial  5.0.2  
pip        19.2.1 
setuptools 41.0.1 
wheel      0.33.4 

Add benchmarks which Sam Gross' NO-GIL implementation used.

While exploring Sam's project, I noticed that there are several benchmarks that we do not currently use.
https://github.com/colesbury/nogil/tree/nogil/benchmarks

IMHO, the following are good candidates to be added, if other core devs agree to add those tests.
I will try to keep in touch with Sam about this issue :)

Regards,
Dong-hee

bm_pickle.is_module_accelerated() is named confusingly

The function returns True if the module is not accelerated. Either it should be renamed, or the sense of the return value should be inverted; either way both call sites should be updated, because currently they read backwards.
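
For illustration of the rename option only (the real bm_pickle.py helper is not quoted here), a name that states what the True value actually means could look like:

    def is_pure_python(module):
        """Return True if `module` is running without its C accelerator
        (illustrative check: the accelerated Pickler lives in _pickle)."""
        return getattr(module.Pickler, '__module__', '') != '_pickle'

    import pickle
    print(is_pure_python(pickle))  # False when the C accelerator is available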

benchmark for python2.7

Can it be used for performance testing of Python 2.7?
I built Python from source as follows -
/home/ci/Python-2.7.13/configure CC=/home/gcc/gcc-7.1.0.install/bin/gcc --build=powerpc64le-linux-gnu --host=powerpc64le-linux-gnu --target=powerpc64le-linux-gnu --enable-shared --with-ensurepip="install" --enable-optimizations --prefix=/home/ci/Python-2.7.13.install --enable-unicode=ucs4

make -j 64
make altinstall
After building Python into the Python-2.7.13.install directory, I went to this install directory and ran the following command -
bin/python2.7 -m pip install performance
After running the above command, I don't see any binary for pyperformance. Is this benchmark suite only applicable to Python 3, or am I making a mistake installing it? Can you please help me out?

Kind Regards,
Pintu

Tornado issue

Hi,
I'm trying to run bm_tornado_http.py with tornado 6.0.3 and I get the following error:
"
Traceback (most recent call last):
File "bm_tornado_http.py", line 17, in
from tornado.gen import coroutine, Task
ImportError: cannot import name 'Task'
"
I think it's caused by an incompatible tornado version, but I have no idea how to fix it.
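
tornado.gen.Task was removed in Tornado 6.0, which is why the import fails with 6.0.3. A hedged sketch of the kind of change the benchmark needs (the coroutine below is illustrative, not the actual bm_tornado_http code):

# Old style, only works on Tornado < 6:
#     from tornado.gen import coroutine, Task
#     @coroutine
#     def fetch(client, url):
#         response = yield Task(client.fetch, url)

# Native-coroutine style that works on Tornado 6.x:
from tornado.httpclient import AsyncHTTPClient

async def fetch(url):
    client = AsyncHTTPClient()
    response = await client.fetch(url)  # fetch() returns an awaitable Future
    return response.body

Alternatively, pinning tornado to a pre-6.0 release should make the unmodified benchmark importable again.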

Thanks

Parse failure when `benchmarks` in config contains only negative filter.

How to reproduce the issue

pyperformance compile_all test.conf, where test.conf contains something like:

[run_benchmark]
benchmarks = -tornado_http # single negative filter

What you expected to happen
All benchmarks except the specified one are run.

What actually happens
...error!

... after PGO and a lot of other output ...
2021-10-21 01:34:06,050: The virtual environment /root/bench_tmpdir/venv has been created
2021-10-21 01:34:06,063: + /root/bench_tmpdir/prefix/bin/python3 -u -m pyperformance run --verbose --output /root/json/2021-10-17_23-20-base/main-54a4e1b53a18.json.gz --benchmarks -tornado_http --venv /root/bench_tmpdir/venv
2021-10-21 01:34:06,123: usage: __main__.py run [-h] [-r] [-f] [--debug-single-value] [-v] [-m]
2021-10-21 01:34:06,123:                        [--affinity CPU_LIST] [-o FILENAME] [--append FILENAME]
2021-10-21 01:34:06,123:                        [-b BM_LIST] [--inherit-environ VAR_LIST]
2021-10-21 01:34:06,123:                        [--inside-venv] [-p PYTHON] [--venv VENV]
2021-10-21 01:34:06,123: __main__.py run: error: argument -b/--benchmarks: expected one argument
2021-10-21 01:34:06,132: Command /root/bench_tmpdir/prefix/bin/python3 -u -m pyperformance run --verbose --output /root/json/2021-10-17_23-20-base/main-54a4e1b53a18.json.gz --benchmarks -tornado_http --venv /root/bench_tmpdir/venv failed with exit code 2
2021-10-21 01:34:06,132: Benchmark completed in 0:26:33.493049
2021-10-21 01:34:06,132: Benchmark failed but results written into /root/json/2021-10-17_23-20-base/main-54a4e1b53a18.json.gz
2021-10-21 01:34:06,145: Command /usr/bin/python3 -m pyperformance compile /root/pyperformance.conf 54a4e1b53a18f0c7420ba03de9608194c4413fc2 base/main --no-update --no-tune failed with exit code 12
2021-10-21 01:34:06,145: Benchmark exit code: 12
2021-10-21 01:34:06,145: FAILED: base/main-54a4e1b53a18f0c7420ba03de9608194c4413fc2
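
The error itself is ordinary argparse behaviour: a value starting with "-" looks like an option, so "--benchmarks -tornado_http" is parsed as "--benchmarks" with no argument. A small reproduction, with the usual workaround of attaching the value with "=" (how the actual PR fixes it may differ):

import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-b", "--benchmarks")

# Fails: "-tornado_http" is mistaken for an option flag.
#     parser.parse_args(["--benchmarks", "-tornado_http"])
#     -> error: argument -b/--benchmarks: expected one argument

# Works: the value is glued on with "=", so argparse cannot split it off.
args = parser.parse_args(["--benchmarks=-tornado_http"])
print(args.benchmarks)  # -tornado_http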

This looks trivial and I'll submit a PR to resolve this.

tornado fails with 3.8.0b2 on Windows

INFO:root:Running `C:\Python38\venv\cpython3.8-2459c2ec2c3d\Scripts\python.exe -u C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperformance\benchmarks\bm_tornado_http.py --output C:\Users\xy\AppData\Local\Temp\tmpdrgssiij`
Traceback (most recent call last):
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperformance\benchmarks\bm_tornado_http.py", line 96, in <module>
    runner.bench_time_func('tornado_http', bench_tornado)
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_runner.py", line 458, in bench_time_func
    return self._main(task)
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_runner.py", line 423, in _main
    bench = self._worker(task)
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_runner.py", line 397, in _worker
    run = task.create_run()
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_worker.py", line 293, in create_run
    self.compute()
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_worker.py", line 331, in compute
    WorkerTask.compute(self)
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_worker.py", line 280, in compute
    self.calibrate_loops()
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_worker.py", line 243, in calibrate_loops
    self._compute_values(self.warmups, nvalue,
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_worker.py", line 76, in _compute_values
    raw_value = self.task_func(self, self.loops)
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperf\_runner.py", line 454, in task_func
    return time_func(loops, *args)
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperformance\benchmarks\bm_tornado_http.py", line 60, in bench_tornado
    server, sock = make_http_server(make_application())
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\pyperformance\benchmarks\bm_tornado_http.py", line 54, in make_http_server
    server.add_sockets(sockets)
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\tornado\tcpserver.py", line 157, in add_sockets
    self._handlers[sock.fileno()] = add_accept_handler(
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\tornado\netutil.py", line 268, in add_accept_handler
    io_loop.add_handler(sock, accept_handler, IOLoop.READ)
  File "C:\Python38\venv\cpython3.8-2459c2ec2c3d\lib\site-packages\tornado\platform\asyncio.py", line 79, in add_handler
    self.asyncio_loop.add_reader(
  File "C:\Python38\lib\asyncio\events.py", line 501, in add_reader
    raise NotImplementedError
NotImplementedError
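
Python 3.8 switched the default asyncio event loop on Windows to the ProactorEventLoop, which does not implement add_reader(), hence the NotImplementedError when Tornado registers its listening socket. A minimal workaround sketch, forcing the selector-based loop before the benchmark starts; whether pyperformance should do this or rely on a newer Tornado is a separate question:

import asyncio
import sys

# The Proactor loop (default on Windows since 3.8) lacks add_reader()/add_writer(),
# which Tornado's networking layer needs; fall back to the selector loop.
if sys.platform == "win32" and sys.version_info >= (3, 8):
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())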

tornado benchmark fails on Python 3.8.0a4+

  File "/home/haypo/performance/performance/benchmarks/bm_tornado_http.py", line 60, in bench_tornado
    sock = make_http_server(make_application())
  File "/home/haypo/performance/performance/benchmarks/bm_tornado_http.py", line 54, in make_http_server
    server.add_sockets(sockets)
  File "lib/python3.8/site-packages/tornado/tcpserver.py", line 157, in add_sockets
    self._handlers[sock.fileno()] = add_accept_handler(
  File "lib/python3.8/site-packages/tornado/netutil.py", line 268, in add_accept_handler
    io_loop.add_handler(sock, accept_handler, IOLoop.READ)
  File "lib/python3.8/site-packages/tornado/platform/asyncio.py", line 76, in add_handler
    raise ValueError("fd %s added twice" % fd)
fd 3 added twice
