How to Perform a Web Server Performance Benchmark?
Do you know your website's average response time? Do you know how many concurrent users your site can handle?
Load testing is essential for web applications so you know how much traffic your website can handle. If you are choosing a web server, one of the first things to do is run a load test and see which one works best for you.
Benchmarking can help you decide:
- Which web server performs best
- How many servers you need to handle x number of requests
- Which configuration gives you the best results
- Which technology stacks perform best
- When your site will start to slow down or break
There are several online tools to perform a stress test; however, if you are looking for an in-house solution or want to benchmark just the web server performance, then you can use ApacheBench or, alternatively, some of the tools listed below.
I used Apache and Nginx web servers hosted on DigitalOcean for this test.
ApacheBench
ApacheBench (ab) is an open-source command-line program that works with any web server. In this post, I will explain how to install this small program and perform the load test to benchmark the results.
Apache
Let’s get ApacheBench installed by using a yum command.
yum install httpd-tools
If you already have httpd-tools, you can skip this step.
Now, let’s see how it performs for 5000 requests with a concurrency of 500.
[root@lab ~]# ab -n 5000 -c 500 http://localhost:80/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests
Server Software: Apache/2.2.15
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 4961 bytes
Concurrency Level: 500
Time taken for tests: 13.389 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Non-2xx responses: 5058
Total transferred: 26094222 bytes
HTML transferred: 25092738 bytes
Requests per second: 373.45 [#/sec] (mean)
Time per request: 1338.866 [ms] (mean)
Time per request: 2.678 [ms] (mean, across all concurrent requests)
Transfer rate: 1903.30 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 42 20.8 41 1000
Processing: 0 428 2116.5 65 13310
Waiting: 0 416 2117.7 55 13303
Total: 51 470 2121.0 102 13378
Percentage of the requests served within a certain time (ms)
50% 102
66% 117
75% 130
80% 132
90% 149
95% 255
98% 13377
99% 13378
100% 13378 (longest request)
[root@lab ~]#
As you can see, Apache handled 373 requests per second, and it took a total of 13.389 seconds to serve all the requests.
Now you know how many requests the default configuration can serve. So when you make configuration changes, you can run the test again to compare the results and choose the best one.
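For example, ab's -k flag enables HTTP keep-alive; re-running the same test with it is a quick way to see how much a single change affects the numbers (the target URL below is the same local test server as above).
ab -k -n 5000 -c 500 http://localhost:80/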
Nginx
Let's run the same test we did for Apache so you can compare which one performs better.
[root@lab ~]# ab -n 5000 -c 500 http://localhost:80/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests
Server Software: nginx/1.10.1
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 3698 bytes
Concurrency Level: 500
Time taken for tests: 0.758 seconds
Complete requests: 5000
Failed requests: 0
Write errors: 0
Total transferred: 19660000 bytes
HTML transferred: 18490000 bytes
Requests per second: 6593.48 [#/sec] (mean)
Time per request: 75.832 [ms] (mean)
Time per request: 0.152 [ms] (mean, across all concurrent requests)
Transfer rate: 25317.93 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 6 11.0 2 53
Processing: 5 19 8.2 17 53
Waiting: 0 18 8.2 16 47
Total: 10 25 17.4 18 79
Percentage of the requests served within a certain time (ms)
50% 18
66% 21
75% 21
80% 22
90% 69
95% 73
98% 75
99% 76
100% 79 (longest request)
[root@lab ~]#
WOW!
Did you see that?
Nginx handled 6,593 requests per second! A winner.
So, just by comparing two web servers, you get an idea of which one to choose for your web application.
The above test was done on CentOS 6.8, 64-bit. You can try multiple combinations of OS and web server versions for optimal results.
Don’t like ApacheBench for whatever reason? No worries; there are plenty of other tools you can use to generate HTTP load.
Siege
Siege is an HTTP load testing utility supported on UNIX. You can put multiple URLs in a text file and load test them all. You can install Siege using yum.
# yum install siege
Let’s run the test with 500 concurrent requests for 5 seconds.
[root@lab ~]# siege -q -t 5S -c 500 http://localhost/
Lifting the server siege... done.
Transactions: 4323 hits
Availability: 100.00 %
Elapsed time: 4.60 secs
Data transferred: 15.25 MB
Response time: 0.04 secs
Transaction rate: 939.78 trans/sec
Throughput: 3.31 MB/sec
Concurrency: 37.97
Successful transactions: 4323
Failed transactions: 0
Longest transaction: 1.04
Shortest transaction: 0.00
[root@lab ~]#
Breaking down the parameters:
-q – run it quietly (without showing request details)
-t – run for 5 seconds
-c – 500 concurrent requests
As you can see, availability is 100%, and the response time is 0.04 seconds. You can adjust the load test parameters based on your goal.
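Since Siege can also read its targets from a file with the -f flag, that is a handy way to exercise several pages in one run; urls.txt below is a hypothetical file with one URL per line.
siege -q -t 1M -c 100 -f urls.txt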
Ali
Ali is a relatively new load testing tool for performing real-time analysis. It supports multiple platforms for installation, including Docker.
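As a sketch, one way to install it (assuming you have a recent Go toolchain; the module path below is the project's GitHub repository, nakabonne/ali) is:
go install github.com/nakabonne/ali@latest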
Once installed, run ali to see the usage details.
root@lab:~# ali
no target given
Usage:
ali [flags] <target URL>
Flags:
-b, --body string A request body to be sent.
-B, --body-file string The path to file whose content will be set as the http request body.
--debug Run in debug mode.
-d, --duration duration The amount of time to issue requests to the targets. Give 0s for an infinite attack. (default 10s)
-H, --header strings A request header to be sent. Can be used multiple times to send multiple headers.
-k, --keepalive Use persistent connections. (default true)
-M, --max-body int Max bytes to capture from response bodies. Give -1 for no limit. (default -1)
-m, --method string An HTTP request method for each request. (default "GET")
-r, --rate int The request rate per second to issue against the targets. Give 0 then it will send requests as fast as possible. (default 50)
-t, --timeout duration The timeout for each request. 0s means to disable timeouts. (default 30s)
-v, --version Print the current version.
Examples:
ali --duration=10m --rate=100 http://host.xz
Author:
Ryo Nakao <ryo@nakao.dev>
root@lab:~#
As you can see above, you have options to send HTTP headers and set the test duration, rate limit, timeout, and more. I did a quick test on Geekflare Tools, and here is what the output looks like.
The report is interactive and gives detailed latency information.
Gobench
Gobench is written in the Go language and is a simple load testing utility to benchmark web server performance. It supports more than 20,000 concurrent users, which ApacheBench does not.
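A typical invocation looks roughly like this (flag names as commonly documented for the gobench CLI; verify with gobench -h on your build, since forks differ): -u is the target URL, -c the number of concurrent clients, and -t the duration in seconds.
gobench -u http://localhost:80 -c 500 -t 10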
Apache JMeter
JMeter is one of the most popular open-source tools for measuring web application performance. JMeter is a Java-based application, and it is not limited to web servers; you can use it against PHP, Java, ASP.NET, SOAP, REST, etc.
JMeter has a decent, friendly GUI, and the latest version 3.0 requires Java 7 or higher to launch the application. You should give JMeter a try if your goal is to optimize web application performance.
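For the actual load run, JMeter is typically launched in non-GUI mode; testplan.jmx and results.jtl below are hypothetical names for a test plan built in the GUI and the output results file.
jmeter -n -t testplan.jmx -l results.jtl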
wrk
wrk is another modern performance measurement tool that puts load on your web server and reports latency, requests per second, transfer per second, and other details.
With wrk, you can specify the number of threads to use for a load test.
Let's take an example of running a test for 5 minutes with 500 concurrent users and 8 threads.
wrk -t8 -c500 -d300s http://localhost
Autocannon
Inspired by wrk, autocannon is written in Node.js. You can use it programmatically, through its API, or as a standalone utility. All you need is Node.js installed as a prerequisite.
You can control the number of connections, requests, duration, workers, timeout, and connection rate, and it offers tons of options to benchmark your web applications.
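A minimal sketch, assuming Node.js and npm are already installed (the localhost target is just a placeholder): install it globally, then fire 100 connections for 30 seconds.
npm install -g autocannon
autocannon -c 100 -d 30 http://localhost/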
curl-loader
curl-loader is written in C to simulate application load, and it supports SSL/TLS. Along with web page tests, you can also use this open-source tool to put load on FTP servers.
You can create a test plan with a mix of HTTP, HTTPS, FTP, and FTPS in a single batch configuration.
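curl-loader drives its load from that batch configuration file rather than from command-line flags; assuming you have written one (my-batch.conf is a hypothetical name), a run looks roughly like this.
curl-loader -f my-batch.conf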
httperf
httperf is a high-performance tool that focuses on micro- and macro-level benchmarks. It supports HTTP/1.1 and SSL protocols.
If you know the expected number of concurrent users and want to test whether your web server can serve that number of requests, you can use the following command.
httperf --server localhost --port 80 --num-conns 1000 --rate 100
The above command will test with 100 requests per second for 1,000 HTTP requests.
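If you also want each connection to issue several requests against a specific path, httperf's --uri and --num-calls options cover that; /index.html below is just an illustrative document path.
httperf --server localhost --port 80 --uri /index.html --num-conns 1000 --num-calls 10 --rate 100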
Tsung
Tsung is a multi-protocol, distributed stress testing tool that can stress HTTP, SOAP, PostgreSQL, LDAP, XMPP, and MySQL servers. It supports HTTP/1.0 and HTTP/1.1, and cookies are handled automatically.
Generating reports is also possible with Tsung.
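Tsung runs are driven by an XML scenario file; assuming you have one (tsung.xml is a hypothetical name) and that tsung_stats.pl lives under the usual install path (it can vary by distribution), a run plus report generation looks roughly like this.
tsung -f tsung.xml start
cd ~/.tsung/log/<run-timestamp> && /usr/lib/tsung/bin/tsung_stats.pl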
Conclusion
I hope the above benchmarking tools give you an idea of your web server's performance and help you decide what works best for your project.
Next, don't forget to monitor your website performance.