From: lingweicai
Subject: [libmicrohttpd] Help: Any HTTP Performance Benchmark Comparison for Microhttpd with Go HTTP Servers
Date: Mon, 26 Jun 2023 08:52:25 +0800 (CST)
Hi Evgeny:
Thanks for your reply and the information. I set up my environment to run the benchmark with "wrk -t 4 -c 1000 -d 10s" for Go and microhttpd.
I configured libmicrohttpd with epoll and TLS, and ran the benchmark with src/examples/benchmark. The results are:
[root@oe23 examples]# wrk -t 4 -c 1000 -d 10s http://localhost:80/
Running 10s test @ http://localhost:80/
4 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 9.11ms 12.81ms 240.35ms 94.05%
Req/Sec 19.52k 4.15k 31.42k 66.75%
779860 requests in 10.10s, 110.07MB read
Requests/sec: 77221.73
Transfer/sec: 10.90MB
I also installed Go and ran a hello-world test like the one in "HTTP Server Benchmark" (ssut.me). The results:
[root@oe23 ~]# wrk -t 4 -c 1000 -d 10s http://localhost:10000/
Running 10s test @ http://localhost:10000/
4 threads and 1000 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 5.98ms 5.81ms 75.36ms 84.23%
Req/Sec 47.99k 16.21k 104.91k 73.35%
1907367 requests in 10.07s, 232.83MB read
Requests/sec: 189439.19
Transfer/sec: 23.12MB
The microhttpd results are much slower. I looked at the code of src/microhttpd/test_client_put_stop.c, but I don't know how to build it into a benchmark I can test with wrk.
Any suggestions on how to build a "hello world" with microhttpd that can compete with Go under wrk? The "ab" results for microhttpd are actually better than Go's, but most benchmarks use wrk and rarely mention ab; for example, ab is not on the list "Top 10 HTTP Benchmarking and Load Testing Tools" (thechief.io).
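For what it's worth, a minimal MHD "hello world" for wrk testing can be sketched roughly as below. This is an illustrative sketch, not the project's official benchmark: the port (8080), the choice of the epoll backend with MHD's internal polling thread, and the build command are my assumptions, not something stated in this thread. It requires a reasonably recent MHD (enum MHD_Result appeared in 0.9.71; use 'int' on older versions); build with something like "gcc hello_mhd.c -o hello_mhd -lmicrohttpd".

```c
/* Hypothetical minimal "hello world" server for libmicrohttpd, intended
 * for load testing with wrk.  Sketch only; flags and port are assumptions. */
#include <microhttpd.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define PAGE "Hello, world!"

static enum MHD_Result
handle_request (void *cls, struct MHD_Connection *connection,
                const char *url, const char *method, const char *version,
                const char *upload_data, size_t *upload_data_size,
                void **req_cls)
{
  struct MHD_Response *response = cls;  /* pre-built persistent response */

  (void) url; (void) version;
  (void) upload_data; (void) upload_data_size; (void) req_cls;

  if (0 != strcmp (method, MHD_HTTP_METHOD_GET))
    return MHD_NO;
  return MHD_queue_response (connection, MHD_HTTP_OK, response);
}

int
main (void)
{
  struct MHD_Response *response;
  struct MHD_Daemon *daemon;

  /* Build the response once; MHD_RESPMEM_PERSISTENT avoids copying the
   * body on every request, which matters under load. */
  response = MHD_create_response_from_buffer (strlen (PAGE), (void *) PAGE,
                                              MHD_RESPMEM_PERSISTENT);
  daemon = MHD_start_daemon (MHD_USE_EPOLL_INTERNAL_THREAD,
                             8080, NULL, NULL,
                             &handle_request, response,
                             MHD_OPTION_END);
  if (NULL == daemon)
    return 1;
  (void) getchar ();            /* run until Enter is pressed */
  MHD_stop_daemon (daemon);
  MHD_destroy_response (response);
  return 0;
}
```

Then point wrk at it, e.g. "wrk -t 4 -c 1000 -d 10s http://localhost:8080/". Note this single-threaded epoll model is only one of MHD's threading modes; results will differ with thread-per-connection or a thread pool.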
Thanks,
Forrest
----- Original Message -----
From: Evgeny Grin <k2k@yandex.ru>
Sent: 2023-06-21 01:39:02
To: <libmicrohttpd@gnu.org>
Subject: Re: [libmicrohttpd] Any HTTP Performance Benchmark Comparison for Microhttpd with Go/Nodejs HTTP Servers
Hi Forrest,
Real benchmark testing of web servers is a tricky thing.
You should consider:
* Real-life scenarios. Ideally the test should run over a real network,
because loopback traffic is handled differently by kernels.
* A very fast client, ideally a very dumb one. Otherwise you may be
testing the client's performance, not the server's.
* Multiple scenarios instead of a single one: small/large requests with
small/large responses (4 combinations), and single request per connection /
multiple requests per connection. All of these occur in real-world usage
and may show different levels of performance.
* Measure different parameters: the latency of the first response, the
latency of subsequent responses, and the total throughput in terms of
requests per second and bytes per second. They are all important to
different users.
* Test with a single thread of requests and with multiple threads of
requests.
* Check the amount of system resources used. For some users this could be
critical.
* Use different compilers and compiler settings (when applicable).
'-O3' may be slower than '-O2'. LTO typically improves the
results, but can have a negative effect as well.
* Analyse the measured data not only by averaging, but also with the
95th percentile for min/max values.
* Repeat the same tests on several platforms, for example on GNU/Linux,
FreeBSD and W32.
For the client I have an initial framework partially implemented in
src/microhttpd/test_client_put_stop.c; however, it is not ready for
response testing.
The article has a link to a third-party MHD mirror. At the time the
article was written, the mirror carried a five-year-old version. Current
MHD is heavily optimised to minimise the number of system calls (which
are costly due to context switching).
The minimal example used in the tests is optimised for simplicity, not
for performance.
Unfortunately I don't have ready-to-use recipe for the proper testing. :)
--
Evgeny
On 20.06.2023 17:50, lingweicai via libmicrohttpd wrote:
did not show good