Comments (15)
I am doing testing on my desktop (Ubuntu 16.04, i5 CPU with 32GB of memory, Java 8). I am totally with you on moving to Docker containers. Currently I am thinking of dockerizing each of them into an individual container with wrk inside. Do you have any better idea?
from light-example-4j.
We need a better explanation of the results; at a minimum, we should define what every number in the output means.
Also, I found that some people created wrk2: https://github.com/giltene/wrk2
Currently thinking to dockerize each of them into individual containers with wrk inside.
We can try docker-compose; then wrk would live in a separate container. But we should test the impact of Docker on our tests.
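A minimal docker-compose sketch of that layout (the service names and the server image are assumptions, not from this thread); wrk reaches the server over the compose network instead of `--net=host`:

```shell
# Write a hypothetical docker-compose.yml: server and wrk2 in separate
# containers on the default compose network.
cat > docker-compose.yml <<'EOF'
version: "2"
services:
  server:
    image: my-light-4j-example   # assumption: the service under test
    ports:
      - "8080:8080"
  wrk:
    image: irus/wrk2
    # the image's entrypoint is assumed to be wrk; these are its arguments
    command: -t 4 -c 128 -d 30 --rate 1000 http://server:8080
    depends_on:
      - server
EOF
```

With this layout, `docker-compose up` starts the server and the load generator together, and the wrk traffic crosses the Docker bridge — exactly the overhead the thread wants to measure.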
@IRus I totally agree. I will write something up when time permits. The test is meant to gauge the raw throughput and latency of each framework on a very simple response ("Hello World!") with no network limitations involved. I am guessing that docker-compose might impact the performance numbers a little, as traffic goes through the Docker network even on the same Docker host. We need to test both approaches.
The wrk2 looks pretty good. Thanks for the link.
I just created a Docker image for wrk2.
It can be used this way:
docker run --net=host irus/wrk2 -t 4 -c 128 -d 30 --rate 1000 http://localhost:8080
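A tiny wrapper (a hypothetical helper, not from the thread) around that invocation, with each wrk2 flag spelled out; `DRY_RUN=1` just prints the command instead of invoking Docker:

```shell
run_wrk2() {
  threads=$1       # -t: number of worker threads
  connections=$2   # -c: HTTP connections held open
  duration=$3      # -d: test duration in seconds
  rate=$4          # --rate: target requests/sec (wrk2's constant-throughput mode)
  url=$5
  cmd="docker run --net=host irus/wrk2 -t $threads -c $connections -d $duration --rate $rate $url"
  if [ "${DRY_RUN:-0}" = "1" ]; then
    echo "$cmd"    # print the composed command without executing it
  else
    $cmd
  fi
}

# prints: docker run --net=host irus/wrk2 -t 4 -c 128 -d 30 --rate 1000 http://localhost:8080
DRY_RUN=1 run_wrk2 4 128 30 1000 http://localhost:8080
```

The `--rate` flag is what distinguishes wrk2 from plain wrk: it holds a constant request rate so latency numbers are not skewed by coordinated omission.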
from light-example-4j.
Other tools:
Apache Bench
Apache JMeter
httperf
Personally, I have used JMeter and ab in the past, but I can't compare them with each other or with wrk/wrk2.
I have used both ab and JMeter. They are not designed to work with high-performance microservices, as they can only generate fewer than 100K requests per second on commodity hardware. wrk is the most efficient tool for generating enough load without hogging all the CPUs.
@IRus I think that memory and CPU limits should be specified.
@cortwave They can be specified via arguments:
--memory="1g" --cpuset="0-3"
Something like this for 1 GB of RAM and 4 CPUs.
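Put together, that would look like the sketch below (the flag values are just the ones suggested above; note that newer Docker releases spell the CPU-pinning flag `--cpuset-cpus`):

```shell
# Run the wrk2 container capped at 1 GB of RAM and pinned to CPUs 0-3.
# --cpuset was renamed --cpuset-cpus in later Docker versions.
limited="docker run --net=host --memory=1g --cpuset-cpus=0-3 \
irus/wrk2 -t 4 -c 128 -d 30 --rate 1000 http://localhost:8080"
echo "$limited"
```

Note that `--cpuset-cpus` only pins the container to cores; it does not cap how much of those cores it may use.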
Docker vs host

yoda@ux32vd:18:27:~/dev/GitHub_IRus/wrk2 (master)$ ./wrk -t 4 -c 128 -d 30 --rate 1000000 http://localhost:8080
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread calibration: mean lat.: 5077.301ms, rate sampling interval: 17072ms
  Thread calibration: mean lat.: 3912.209ms, rate sampling interval: 16326ms
  Thread calibration: mean lat.: 4107.606ms, rate sampling interval: 16482ms
  Thread calibration: mean lat.: 4988.354ms, rate sampling interval: 16990ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.42s     6.93s   28.44s    65.53%
    Req/Sec    22.21k     1.71k   24.42k    50.00%
  2574749 requests in 30.00s, 260.28MB read
Requests/sec:  85831.26
Transfer/sec:      8.68MB

yoda@ux32vd:18:28:~/dev/GitHub_IRus/wrk2 (master)$ docker run --net=host irus/wrk2 -t 4 -c 128 -d 30 --rate 1000000 http://localhost:8080
Running 30s test @ http://localhost:8080
  4 threads and 128 connections
  Thread calibration: mean lat.: 4568.203ms, rate sampling interval: 17104ms
  Thread calibration: mean lat.: 4686.661ms, rate sampling interval: 16449ms
  Thread calibration: mean lat.: 3811.947ms, rate sampling interval: 15990ms
  Thread calibration: mean lat.: 4236.410ms, rate sampling interval: 15024ms
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.36s     6.18s   29.15s    64.90%
    Req/Sec    18.06k     2.60k   21.60k    50.00%
  2163230 requests in 30.00s, 218.68MB read
Requests/sec:  72109.14
Transfer/sec:      7.29MB
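Comparing the two Requests/sec figures gives a rough number for the overhead of running the load generator in a container:

```shell
# Throughput drop between the host run and the containerized run above.
host_rps=85831.26
docker_rps=72109.14
overhead=$(awk -v h="$host_rps" -v d="$docker_rps" \
  'BEGIN { printf "%.1f", (h - d) / h * 100 }')
echo "throughput drop in Docker: ${overhead}%"   # prints "throughput drop in Docker: 16.0%"
```

A single pair of runs is a small sample, so treat the ~16% figure as indicative rather than definitive.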
This is expected, as requests have to go through the Docker network, which adds another layer. Is your first run using wrk2?
Sure, I compiled wrk2 (there are a few warnings, but it looks good).
I will also try wrk both in containers and without them.
I tried to limit the container running wrk2 (with the server running on the host), but it works fine (I mean it is still pretty fast, so the results don't change) even with 50 MB of RAM and one CPU. I don't know how to limit CPU performance in a container, and I think it is actually impossible :) Maybe virtual machines can help with limiting CPU. So my conclusion is that limiting wrk2 doesn't make much sense.
Upd. Why do we actually want to limit wrk2? I think it doesn't make sense at all. We can limit the server, but because every machine has a different CPU, that doesn't help much anyway.
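For what it's worth, cgroups can throttle a container's CPU time: Docker exposes this via --cpu-quota/--cpu-period (and, in later releases, --cpus). A sketch, reusing the irus/wrk2 image from above; the quota values are illustrative assumptions:

```shell
# CPU bandwidth limiting via cgroups: 50ms of CPU time per 100ms period,
# i.e. roughly half of one core (values chosen for illustration only).
throttled="docker run --net=host --cpu-period=100000 --cpu-quota=50000 \
irus/wrk2 -t 4 -c 128 -d 30 --rate 1000 http://localhost:8080"
echo "$throttled"
```

Unlike `--cpuset`, which only pins cores, a quota actually caps how much CPU time the container may consume per period.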
These limits will only work in certain scenarios on certain OSes. For Java it is very complicated. I was trying to gauge memory usage and couldn't find any reliable way to do so.
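One coarse option is resident set size (RSS) from ps; for a JVM this includes heap, metaspace, code cache, and thread stacks, so it overstates "used heap" and is only a rough proxy, which is part of why reliable numbers are hard to get:

```shell
# RSS (resident set size) of a process, in kB. Using $$ (this shell) as a
# stand-in PID; point it at the server's PID in a real measurement.
pid=$$
rss_kb=$(ps -o rss= -p "$pid")
echo "RSS of ${pid}: ${rss_kb} kB"
```

For heap-level detail on a JVM, tools like `jcmd <pid> GC.heap_info` give finer-grained numbers, but they measure a different thing than the OS-level footprint.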
This task is still in our pipeline, but as the benchmark has been moved to its own repo, we are going to track it there.