Comments (2)
This is a valid concern, but a very poor benchmark. Despite one of the inputs being named "big," both are very small, coming out to a fraction of a megabyte of output at most. The data is also extremely flat and simple, which isn't representative of real-world data at all. With data this small and simple, the performance of your JSON library largely doesn't matter and is never going to be your bottleneck outside of crafted microbenchmarks. I understand that the statements in the readme sound rather confident and sweeping ("Jsonrs is faster" rather than "Jsonrs is nearly always faster in cases that matter, but can sometimes be slower in crafted scenarios that never matter in the real world"), but I think it's fair to assume the reader understands that these statements aren't universal absolutes and have a relevant domain in which they are generally true.
For small cases like these specifically, any JSON library will probably perform about as well as any other, and nobody working with data like this would run into performance issues caused by their JSON library of choice, so I'd never expect them to find their way here in the first place to read the statements in the readme. On top of that, simple data like this is exactly what Jsonrs's lean mode is for, which should perform better than the fat default mode of Jsonrs that you're testing.
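For reference, lean mode is opted into per call via the `lean: true` option (the same option used in the benchmark below). The exact trade-off is documented in Jsonrs itself; this is just a sketch of the call shapes:

```elixir
map = %{"name" => "benchee", "runs" => 100}

Jsonrs.encode!(map)              # default ("fat") mode
Jsonrs.encode!(map, lean: true)  # lean mode, intended for simple data like this
```

For plain maps of strings and numbers like the ones in this benchmark, both calls produce the same JSON output.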
Here is your benchmark, refactored into a single Benchee run, with Jsonrs running in lean mode as an additional function under test, and with memory stats:
small_map = Map.new(1..100, &{"key#{&1}", &1})
big_map = Map.new(1..10_000, &{"key#{&1}", &1})

Benchee.run(
  %{
    "Jsonrs.encode!" => &Jsonrs.encode!/1,
    "Jsonrs_lean.encode!" => &Jsonrs.encode!(&1, lean: true),
    "Jason.encode!" => &Jason.encode!/1
  },
  memory_time: 0.01,
  inputs: %{"small" => small_map, "big" => big_map}
)
And here is how it runs on my machine:
Operating System: macOS
CPU Information: Intel(R) Core(TM) i5-3210M CPU @ 2.50GHz
Number of Available Cores: 4
Available memory: 8 GB
Elixir 1.14.2
Erlang 25.1.2
Benchmark suite executing with the following configuration:
warmup: 2 s
time: 5 s
memory time: 10 ms
inputs: big, small
[...]
##### With input big #####
Name                          ips        average  deviation         median         99th %
Jsonrs_lean.encode!        505.67        1.98 ms     ±6.47%        1.94 ms        2.72 ms
Jason.encode!              244.46        4.09 ms    ±17.35%        3.89 ms        6.13 ms
Jsonrs.encode!             213.51        4.68 ms     ±6.74%        4.59 ms        6.14 ms

Comparison:
Jsonrs_lean.encode!        505.67
Jason.encode!              244.46 - 2.07x slower +2.11 ms
Jsonrs.encode!             213.51 - 2.37x slower +2.71 ms

Memory usage statistics:

Name                 Memory usage
Jsonrs_lean.encode!    0.00011 MB
Jsonrs.encode!            0.84 MB - 7346.87x memory usage +0.84 MB
Jason.encode!             1.68 MB - 14674.33x memory usage +1.68 MB

**All measurements for memory usage were the same**
##### With input small #####
Name                          ips        average  deviation         median         99th %
Jsonrs_lean.encode!       38.24 K       26.15 μs    ±35.02%       24.71 μs       56.14 μs
Jason.encode!             35.14 K       28.46 μs   ±105.13%       24.77 μs       67.04 μs
Jsonrs.encode!            21.78 K       45.92 μs    ±27.02%       43.08 μs       89.09 μs

Comparison:
Jsonrs_lean.encode!       38.24 K
Jason.encode!             35.14 K - 1.09x slower +2.31 μs
Jsonrs.encode!            21.78 K - 1.76x slower +19.76 μs

Memory usage statistics:

Name                 Memory usage
Jsonrs_lean.encode!       0.117 KB
Jsonrs.encode!             7.96 KB - 67.93x memory usage +7.84 KB
Jason.encode!             15.52 KB - 132.40x memory usage +15.40 KB

**All measurements for memory usage were the same**
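To make the comparison lines concrete: "ips" (iterations per second) is the reciprocal of the average runtime, and the slowdown factors are ratios of averages. You can check this against the "big" table above in IEx:

```elixir
# 1.98 ms average per iteration -> iterations per second
1000 / 1.98   # ~505 ips, matching Jsonrs_lean's reported ips

# Jason's average divided by lean Jsonrs's average
4.09 / 1.98   # ~2.07, the reported "2.07x slower"
```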
As you can see, Jsonrs in lean mode runs faster for me than both normal Jsonrs and Jason in this benchmark, by a good margin, and the runtime difference between non-lean Jsonrs and Jason is much smaller on my machine than on yours. The memory differences also skew heavily in favor of Jsonrs, though that's not necessarily relevant to speed comparisons. Either way, I maintain that this benchmark isn't representative of useful real-world performance, and that we should focus instead on better benchmarks using real-world data.
And hell, let's just do that. Jason has its own corpus of real-world data that it benchmarks against, and we can run Jason's own benchmarks to compare it against Jsonrs after setting the encode_jobs as such:

encode_jobs = %{
  "Jason" => &Jason.encode/1,
  "Jsonrs" => &Jsonrs.encode/1,
  "Jsonrs (lean)" => &Jsonrs.encode(&1, lean: true)
}
Here are benchee graphs comparing runtime and memory usage of Jason, Jsonrs, and Jsonrs in lean mode, ordered from smallest to largest benchmark data size (with the last three being the only ones I would really consider "big"):
Since I first tested Jason, it has improved quite a bit in runtime performance (and not at all in memory usage). But it's still pretty solidly outperformed by Jsonrs in every single test of real-world data, so I don't think the claims in the readme particularly need to be changed. The issue is nuanced, though, so I'd love to hear if you have further opinions on how the claims should be worded.
@benhaney Wow, that is one elaborate response! 😃 I missed the lean option, and yes, on my machine Jsonrs is now much faster than Jason! Quite exciting. And yes, the payloads don't represent real-world workloads; I just wanted to get a quick feel for how Jsonrs performs.
You convinced me :) Thanks for the quick answer.