smarr / are-we-fast-yet
Are We Fast Yet? Comparing Language Implementations with Objects, Closures, and Arrays
License: Other
Port all benchmarks to Scala and add it to the setup.
Hello, I'm starting a Common Lisp version at https://github.com/q3cpma/are-we-fast-yet/tree/common-lisp and I have some questions about it.
Any comments on the code (https://github.com/q3cpma/are-we-fast-yet/tree/common-lisp/benchmarks/Common-Lisp), or on which implementations may be interesting? I couldn't get CLISP to work, but I intend to try Clasp.
shared_ptr might be sufficient; I'm not sure whether there are any relevant circular data structures. But those might be easy to work around by breaking the circularity explicitly.
Port all benchmarks to Python 2.x and add PyPy and CPython to setup.
Structure based on https://github.com/smarr/ReBench/blob/master/CHANGELOG.md
The dictionary tracks the number of set operations instead of the number of stored elements (it should not count overwrites).
See smarr/SOMns-corelib@44a1088
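The intended semantics can be sketched in Ruby as follows (`CountingDict`, `at_put`, and `at` are illustrative names, not the benchmark's actual identifiers): the size counter only increments for new keys, so overwriting an existing key does not inflate it.

```ruby
# Sketch of the intended size semantics: size counts stored keys,
# not set operations, so overwrites must not increment it.
# Names are illustrative, not the benchmark's actual API.
class CountingDict
  attr_reader :size

  def initialize
    @store = {}
    @size = 0
  end

  def at_put(key, value)
    @size += 1 unless @store.key?(key)  # only count genuinely new keys
    @store[key] = value
  end

  def at(key)
    @store[key]
  end
end
```

With this, storing the same key twice leaves the size at 1, which matches counting stored elements rather than set operations.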
I am currently starting to use asdf to set up my own benchmarking infrastructure.
This seems a comprehensive enough project to support a broad range of the language implementations we may want to compare.
@eregon any thoughts? It seems fine from what I have seen so far, but perhaps you know about rough edges?
Needs to be updated with the latest data, and structured/documented so that it gives a good overview.
Colors of plots should be aligned as done in the other examples.
List and describe the benchmarks at a high level.
Also include a note on their origin.
The main challenge is the few polymorphic elements, for instance in DeltaBlue.
See https://github.com/smarr/are-we-fast-yet/blob/master/benchmarks/Ruby/som.rb#L138
vs. https://github.com/smarr/are-we-fast-yet/blob/master/benchmarks/JavaScript/som.js#L135
This is probably also an issue in Crystal.
Add the other SOM implementations to the setup.
For instance in Ruby, the total is going to be a float: https://github.com/smarr/are-we-fast-yet/blob/master/benchmarks/Ruby/harness.rb#L47
Make sure these fields use the proper type for initialization.
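The pitfall can be sketched in Ruby as follows (variable names are illustrative; this sketch assumes the total ends up as a Float via its initialization, which is one possible cause): once an accumulator is a Float, every later addition stays in floating-point arithmetic, even for integer inputs.

```ruby
# Illustrative sketch: an accumulator initialized as a Float stays a
# Float through integer additions, while an Integer-initialized one
# keeps exact integer arithmetic.
total_float = 0.0  # float initialization
total_int   = 0    # integer initialization

[300, 250, 275].each do |elapsed_us|
  total_float += elapsed_us
  total_int   += elapsed_us
end

total_float.class  # => Float   (825.0)
total_int.class    # => Integer (825)
```

For a harness that sums integer microsecond timings, the Integer initialization keeps the total exact.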
https://awfy-speed.stefan-marr.de/comparison/ is a great idea, but somehow I can't see the bars; neither when all executables are selected, nor when I select the subset I'm interested in. Here is what I see in Firefox 78:
Looks the same in Chromium.
Is there a way to get access to the raw data behind the graphs? E.g., I would like to know which options SOM++ was compiled with, and which versions of the executables and benchmarks were used.
Many thanks.
The goal is that a setup script builds everything, and can run at least most experiments automatically.
Port all benchmarks to Dart and add Dart VM to setup.
Add the data and discussion of metrics to the documentation.
Port all benchmarks to Pharo/Squeak and add them to setup.
Consider using the AweSOM SOM parser as foundation, adapt it to load classes and methods into Squeak/Pharo.
That way, we might be able to avoid having multiple copies of the benchmarks. Or, at least have them in a format that is easily kept in sync.
/cc @charig, might be relevant for you. But haven't started with it yet.
With a problem size of 10, as in the tests, CD runs fine, but with larger problem sizes, it does not finish.
For example:
pharo AWFY.image run.st CD 100 100
DeltaBlue currently calls into sort code, which is only partially implemented.
Though, it seems, the code only ever sees vectors of size 0 and 1 anyway.
So, we might best replace it with a len < 2 test and an exception.
This would avoid having to translate the sort code across languages.
PR #67 does that for Ruby.
Should check that the other languages are aligned.
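The proposed replacement can be sketched in Ruby as follows (the method name and error message are illustrative, not the benchmark's actual API): vectors of size 0 and 1 are trivially sorted, and anything larger raises, signalling that the full sort would have been needed.

```ruby
# Sketch of the proposed simplification: instead of porting the
# partially implemented sort, guard with a len < 2 test and raise
# for larger inputs, which reportedly never occur in practice.
def sort!(vec)
  raise 'sort not supported for size >= 2' if vec.size >= 2
  vec  # vectors of size 0 or 1 are already sorted
end
```

If a benchmark change ever produced a larger vector, the exception would surface immediately instead of silently using an incomplete sort.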
This is mostly to simplify maintenance, keeping languages in sync, and knowing where to find things.
The .c files require harness.h
but it doesn't seem to exist.
The results document is useful, but it hasn't changed in eight years.
Are there any updated performance tables somewhere, like this one and this one, comparing different Smalltalk, Lua, and JS implementations with Java (and possibly C++)? If there are no current results, is there at least a complete, referenceable table somewhere with the latest measurement results, and ideally a brief description of the candidate VMs?
The timeline view referenced in the readme doesn't seem to work anymore, and personally I consider simple tables to be more useful than graphs.
I also had a look at https://arewefastyet.com which seems to have changed and now compares browser JS VMs only.
It seems it's time to switch to using the class syntax.
It's established enough, and easily available in Node.js.
For backwards compatibility, I may need to think about generating a more compatible version using Babel or similar.
For instance on Json, it seems like it would make sense, given there are lots of String literals, which are actually allocations without # frozen_string_literal: true as the first line.
From a quick local run, it's faster on CRuby with frozen_string_literal, and about the same for TruffleRuby:
CRuby: ~428091 -> ~290305
TruffleRuby: 66057us -> 65282us
OTOH, maybe we want to measure how well the VM optimizes?
Ruby code in the wild is a mix of files with and without # frozen_string_literal: true. Ruby 3 still doesn't enable it by default, due to compatibility concerns. I'd think that nowadays, for performance-sensitive files, people typically use # frozen_string_literal: true.
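The allocation difference behind this can be illustrated with a small CRuby sketch (the `plain`/`frozen_lit` names are hypothetical; `.freeze` on a literal has the same effect at that call site as the file-wide magic comment): each evaluation of an unfrozen literal allocates a new String, while a frozen literal is reused.

```ruby
# Benchmark-independent sketch: an unfrozen string literal allocates
# a new String object on every evaluation; a frozen literal is
# cached and reused, avoiding the allocation in hot code.
def plain
  'JSON'          # new String object on every call
end

def frozen_lit
  'JSON'.freeze   # frozen, reused object on CRuby
end
```

On CRuby, `frozen_lit.equal?(frozen_lit)` is true (same object both times), while two calls to `plain` return distinct objects; that per-call allocation is what # frozen_string_literal: true removes for every literal in the file.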
There seems to be an Oberon port here:
https://github.com/rochus-keller/Oberon/tree/master/testcases/Are-we-fast-yet
Port all benchmarks to Go and add it to the setup.
In case anyone is interested, here is a C++98 implementation of the benchmark: https://github.com/rochus-keller/Are-we-fast-yet/tree/main/Cpp.
I have tried to apply the guidelines mutatis mutandis, striking a compromise between how a C++98 developer would have implemented it and staying faithful to the existing Java code, without wasting too much performance. I also tried to avoid unnecessary rewriting. Basically, all decisions are open for discussion; we just have to commit to something.
There is also an existing Oberon+ version and currently I'm planning for a FreePascal version.
This should perhaps be done similarly to the C++ solution.
The other languages also need to be checked for consistency.