smarr / are-we-fast-yet

Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays

License: Other

Languages: Shell 0.77%, R 0.61%, Java 18.82%, Ruby 10.93%, JavaScript 11.80%, Makefile 0.05%, Crystal 11.54%, Smalltalk 4.67%, Lua 14.13%, Python 13.36%, C++ 13.31%
Topics: benchmark, benchmarking, comparison, dynamic-languages, language, language-implementations, performance

are-we-fast-yet's People

Contributors

charig, chrisseaton, eregon, fniephaus, fperrad, krono, raphaelvigee, smarr, timfel


are-we-fast-yet's Issues

Common Lisp implementation questions

Hello, I'm starting a Common Lisp version at https://github.com/q3cpma/are-we-fast-yet/tree/common-lisp and I have some questions about it.

  • Am I supposed to set the maximum optimization flags?
  • How about the code itself? Since CL has gradual typing, adding some types here and there can have a significant impact.
  • I am using CLOS for now (except for som:random), but I may try structs at some point, for optimisation purposes and because the dynamic features of CLOS don't seem to be needed here. Does it make sense to you to have a separate version or something?

Any comments on the code (https://github.com/q3cpma/are-we-fast-yet/tree/common-lisp/benchmarks/Common-Lisp), or on which implementations may be interesting? I couldn't make CLISP work, but I intend to try Clasp.

Port Benchmarks to C++ using smart pointers

shared_ptr might be sufficient; I am not sure whether there are any relevant circular data structures.
But those could likely be worked around easily (by breaking the circularity explicitly).

Update and cleanup performance report

Needs to be updated with the latest data, and structured/documented so that it gives a good overview.

Colors of plots should be aligned as done in the other examples.

Comparison chart not working, raw data access question

https://awfy-speed.stefan-marr.de/comparison/ is a great idea, but somehow I don't manage to see the bars, neither when all executables are selected nor when I select the subset I'm interested in. Here is what I see in Firefox 78:
[screenshot]
It looks the same in Chromium.

Is there a way to get access to the raw data behind the graphs? E.g., I would like to know which options SOM++ was compiled with, and which versions of the executables and benchmarks were used.

Many thanks.

Make repo self-contained and all benchmarks executable

  • include automated setup for all non-standard VMs
  • include instructions on how to obtain GraalVM
  • document software requirements

The goal is that a setup script builds everything and can run at least most of the experiments automatically.

Port benchmarks to Pharo/Squeak

Port all benchmarks to Pharo/Squeak and add them to setup.

Consider using the AweSOM SOM parser as a foundation, adapting it to load classes and methods into Squeak/Pharo.
That way, we might be able to avoid having multiple copies of the benchmarks, or at least have them in a format that is easily kept in sync.

/cc @charig, this might be relevant for you. But I haven't started on it yet.

CD times out on Pharo 5.0

With a problem size of 10, as in the tests, CD runs fine, but with larger problem sizes, it does not finish.

For example:
pharo AWFY.image run.st CD 100 100

DeltaBlue sort code

DeltaBlue currently calls into sort code that is only partially implemented.
It seems, though, that the code only ever sees vectors of size 0 and 1 anyway.
So, we might best replace the sort with a len < 2 check that raises an exception otherwise.
This would avoid having to translate the sort code across languages.
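The suggested replacement could look roughly like this (a sketch in the style of the Java reference implementation; `Plan` and `sortInPlace` are stand-in names, not the repository's actual identifiers):

```java
import java.util.ArrayList;
import java.util.List;

final class Plan {
    // The benchmark only ever sorts vectors of size 0 or 1, so anything
    // larger indicates a porting bug and should fail loudly rather than
    // run a partially implemented sort.
    static <T> void sortInPlace(List<T> v) {
        if (v.size() >= 2) {
            throw new RuntimeException("sort called on vector with len >= 2");
        }
        // size 0 or 1: already sorted, nothing to do
    }

    public static void main(String[] args) {
        List<Integer> v = new ArrayList<>();
        sortInPlace(v);   // size 0: fine
        v.add(42);
        sortInPlace(v);   // size 1: fine
        System.out.println("ok");
    }
}
```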

Missing header

The .c files require harness.h, but it doesn't seem to exist.

Recent performance results (tables) of AWFY

The results document is useful, but it hasn't changed in eight years.

Are there any updated performance tables somewhere, like this one and this one, comparing different Smalltalk, Lua, and JS implementations with Java (and possibly C++)? If there are no current results, is there at least a complete, referenceable table somewhere with the latest measurement results, and ideally a brief description of the candidate VMs?

The timeline view referenced in the readme doesn't seem to work anymore, and personally I consider simple tables to be more useful than graphs.

I also had a look at https://arewefastyet.com which seems to have changed and now compares browser JS VMs only.

Update JavaScript benchmarks to use ECMAScript classes

It seems it is time to switch to using the class syntax.

It's established enough and easily available in Node.js.
For backwards compatibility, I may need to think about generating a more compatible version using Babel or similar.
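For illustration, the kind of change proposed here, moving from prototype-based definitions to ECMAScript class syntax; `Benchmark`, `Bounce`, and the method names are stand-ins echoing the harness's style, not the repository's actual code:

```javascript
// A base class with shared harness logic, and a benchmark subclass
// overriding the hooks -- all in class syntax instead of prototypes.
class Benchmark {
  innerBenchmarkLoop(innerIterations) {
    for (let i = 0; i < innerIterations; i += 1) {
      if (!this.verifyResult(this.benchmark())) { return false; }
    }
    return true;
  }
}

class Bounce extends Benchmark {
  benchmark() { return 42; }
  verifyResult(result) { return result === 42; }
}

console.log(new Bounce().innerBenchmarkLoop(3)); // prints true
```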

Should frozen_string_literal be used for Ruby benchmarks?

For instance, for Json it seems like it would make sense, given there are lots of String literals, which are actually allocations without # frozen_string_literal: true as the first line.
From a quick local run, it's faster on CRuby with frozen_string_literal, and it is about the same for TruffleRuby:

CRuby: ~428091 -> ~290305
TruffleRuby: 66057us -> 65282us

OTOH, maybe we want to measure how well the VM optimizes?

Ruby code in the wild is a mix of # frozen_string_literal: true and not using it. Ruby 3 still doesn't enable it by default because of compatibility concerns. I'd think that nowadays, for performance-sensitive files, people typically use # frozen_string_literal: true.
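A minimal illustration of the effect discussed above (not taken from the benchmark sources): with the magic comment, a string literal is frozen and deduplicated, so repeated evaluation yields the same object instead of a fresh allocation each time.

```ruby
# frozen_string_literal: true

# A method whose body contains a string literal; without the magic comment,
# each call would allocate a new String.
def greeting
  'hello'
end

a = greeting
b = greeting
raise 'expected frozen literal' unless a.frozen?
raise 'expected a single shared object' unless a.equal?(b)
puts 'ok'
```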

C++ version of the benchmark

In case anyone is interested, here is a C++98 implementation of the benchmark: https://github.com/rochus-keller/Are-we-fast-yet/tree/main/Cpp.

I have tried to apply the guidelines mutatis mutandis, and struck a compromise between how a C++98 developer would have implemented it and being faithful to the existing Java code, insofar as that does not waste too much performance. I also tried to avoid unnecessary rewriting. Basically, we can discuss all decisions; we just have to commit to something.

There is also an existing Oberon+ version, and I'm currently planning a FreePascal version.
