Comments (8)
I have been creating benchmarks since 2008, and in that time I have seen almost everything: tests with very different response sizes for the same body, tests that claim one framework is faster when they are really measuring a 502 from the server rather than the framework, and so on.
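The 502 trap above is easy to guard against: verify the status code before trusting any throughput number. A minimal sketch (the `check_status` helper and the example URL are illustrative, not part of the TFB toolset):

```shell
# Sanity-check the HTTP status before trusting a throughput number:
# a server that fast-fails with 502s can look "fast" in a naive benchmark.
check_status() {
  if [ "$1" = "200" ]; then
    echo "HTTP $1: OK, safe to benchmark"
  else
    echo "HTTP $1: not OK, throughput numbers would be meaningless"
    return 1
  fi
}

# In a real run the status would come from the server under test, e.g.:
#   curl -s -o /dev/null -w '%{http_code}' http://localhost:8080/json
check_status 200
check_status 502 || true
```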
I am the first to want a fair benchmark, where everyone uses the best tools for the job, without tricks.
But I tell you: if one Docker image is slower, the best thing to do is open an issue with the maintainers of the Dockerfile, not with the benchmark. People who look at the benchmark results don't know which Docker image each framework is using.
Use the benchmark as a tool to find problems in your framework or toolset, and report those problems to build a faster and healthier ecosystem for your framework and/or language.
Please don't use the benchmark only as a competition of result numbers.
Thanks.
from frameworkbenchmarks.
Sorry, but Distmod has still never run on the Citrine servers; the PR is still not merged.
If you ran your tests locally, the numbers will probably be very different from those on the Citrine servers.
And you are welcome to change any framework benchmark to get better results.
@joanhey, it is not entirely clear what you wanted to say in this comment. Granted, my local tests won't match the Citrine servers, but what does that change? Do you think the results will change disproportionately between frameworks?
As for the fact that I can change any test to improve it: the change I proposed is probably not to everyone's liking, so changing the rules should be done by the owners of these tests in the first place.
I can assure you that the results can change a lot between a local machine and the Citrine servers.
The results on my PC can also be very different from the results on your PC.
The benchmark always needs to run on the same servers; any small configuration change on the server will change the results a lot.
Absolutely agree. But what does that change in the context of my issue? Do you doubt that frameworks using different versions of Docker images will show different results?
No, but it won't be a big change.
And if a big difference does exist, it is probably better to open an issue with the maintainers of the Docker image.
I know from PHP that the official PHP Docker images are often much slower than a plain Ubuntu image with the required PHP packages installed on top.
But that is what a benchmark is for: showing discrepancies. The problem is not in the benchmark; it is in the Dockerfile.
And the correct thing is to open an issue against the Dockerfile, not against the benchmark.
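The two setups being compared above might look roughly like this as a multi-stage Dockerfile sketch (the image tags and package name are illustrative assumptions, not taken from the actual TFB Dockerfiles):

```dockerfile
# Variant A: the official PHP image (convenient, but reportedly
# slower in some setups, per the comment above)
FROM php:8.3-cli AS official
CMD ["php", "-v"]

# Variant B: a plain Ubuntu base with PHP installed from the
# distribution packages (the alternative the comment describes)
FROM ubuntu:24.04 AS from-packages
RUN apt-get update \
 && apt-get install -y --no-install-recommends php-cli \
 && rm -rf /var/lib/apt/lists/*
CMD ["php", "-v"]
```

Building each stage separately (`docker build --target official` vs `--target from-packages`) would let the two be benchmarked side by side.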
Tests should have the same initial conditions for all frameworks as much as possible. As you can guess, it is not a big problem for me if my framework shows better results (among Node.js frameworks). But I want a fair competition, and my proposed changes add precision to the results of that competition.
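Pinning the runtime version in each framework's Dockerfile is the usual way to get the reproducibility argued for here. A minimal sketch (the specific image and tag are only an example, not a TFB requirement):

```dockerfile
# A floating tag silently picks up new engine versions between runs,
# so two rounds may not share the same initial conditions:
# FROM node:latest

# A pinned tag keeps the runtime identical across runs and machines:
FROM node:20.11.1-slim
CMD ["node", "--version"]
```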
We've discussed this a lot in the past and it's not something we plan on requiring at this time. There are reasons why framework maintainers may want their framework running on a specific version of a language/engine/etc and we leave it up to those maintainers to decide.