WordPress Hosting: Best Web Server for High Performance on High Traffic in 2017

Website performance is one of the keys to a site's success, and growing user traffic makes it a vital concern. In recent years, demand for the WordPress platform has grown rapidly, but there are hardly any servers optimized for the best possible performance and security when you expect thousands of users on your website. One of our clients (News Corp) was looking for performance improvements for their site and consulted us for the best possible solution.

There are a huge number of open source and commercial web servers out there. Apache is the most common web server in use today, but its performance under high load is poor with the default PHP module. We investigated many web servers and shortlisted a few for our final decision. We picked OpenLiteSpeed, Nginx, and Apache+Nginx (combo) for the performance load testing and are sharing the results in this blog.

[Image: Nginx vs. OpenLiteSpeed vs. Nginx+Apache]

Testing Environment:

Instead of testing a hello-world page or some static files, we used the latest WordPress release and imported test data to get a realistic experience. We used VPS servers with the following configuration:

  • Intel Xeon E5-2620 (4 vcpu)
  • 4GB RAM
  • Apache Benchmark Tool
  • 100 concurrent connections for 1000 requests
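The load above was generated with the Apache Benchmark (ab) tool. A minimal sketch of the invocation, reconstructed from the parameters listed (the exact flags used are not recorded in the post):

```shell
# Reconstruction of the benchmark command: 1000 total requests (-n)
# at 100 concurrent connections (-c). Run from a separate load-generator
# machine, swapping the hostname for each server under test.
CMD="ab -n 1000 -c 100 http://nginx.ipragmatech.com/"
echo "$CMD"
```

On Debian/Ubuntu systems, ab ships with the apache2-utils package.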

Performance Optimization:

We optimized each server using our in-house high-performance tweaks. On all servers, the PHP platform (PHP-FPM), caching (APC), and the database were fine-tuned for high performance and availability. We installed Nginx on http://nginx.ipragmatech.com, OpenLiteSpeed on http://litespeed.ipragmatech.com, and Nginx+Apache (combo) on http://acehost.ipragmatech.com (we used Nginx as a reverse proxy, with Apache+PHP-FPM+FastCGI processing the PHP files).
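The combo setup can be sketched roughly as the following Nginx config; the backend port (8080), document root, and header choices are assumptions for illustration, not the exact production configuration:

```nginx
# Nginx listens on port 80, serves static assets itself, and proxies
# everything else to Apache (assumed to listen on 127.0.0.1:8080),
# which hands PHP off to PHP-FPM via FastCGI.
server {
    listen 80;
    server_name acehost.ipragmatech.com;
    root /var/www/wordpress;

    # Static files served directly by Nginx
    location ~* \.(css|js|png|jpe?g|gif|ico|svg)$ {
        expires 30d;
        access_log off;
    }

    # Dynamic requests forwarded to the Apache backend
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```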

 

Performance comparison of the best servers for WordPress

Conclusion:

As per our benchmark, Nginx+Apache performed nearly 15% better than OpenLiteSpeed and nearly 10% better than Nginx on the longest-request metric. We think the optimization of the PHP platform and the fine-tuning of Apache resulted in the better performance. OpenLiteSpeed's performance looks better when there is less load on the site, but once the load increases its performance degrades. The Nginx+Apache combo wins the race under higher user load, and hence we have chosen it to provide our optimized WordPress hosting in collaboration with Acehost. These results may vary based on hardware configurations on dedicated or VPS plans.
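One plausible reading of the 15% and 10% figures is the longest-request times from the ab output (9252 ms for OpenLiteSpeed, 8843 ms for Nginx, 8057 ms for Nginx+Apache); a quick sketch of that arithmetic:

```shell
# Longest request (ms) per setup, taken from the ab results in this post.
combo=8057; nginx=8843; litespeed=9252
awk -v c=$combo -v n=$nginx -v l=$litespeed 'BEGIN {
    printf "Nginx+Apache vs OpenLiteSpeed: %.1f%%\n", (l - c) / c * 100
    printf "Nginx+Apache vs Nginx:         %.1f%%\n", (n - c) / c * 100
}'
# Prints roughly 14.8% and 9.8%, i.e. "nearly 15%" and "nearly 10%".
```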

Apache Benchmark Testing Data

[mt_tabs] [mt_tab title="Nginx + Apache"]
Server Software: apachenginx
Server Hostname: acehost.ipragmatech.com
Server Port: 80

Document Path: /
Document Length: 56833 bytes

Concurrency Level: 100
Time taken for tests: 48.881 seconds
Complete requests: 1000
Failed requests: 8
(Connect: 0, Receive: 0, Length: 8, Exceptions: 0)
Write errors: 0
Non-2xx responses: 1
Total transferred: 57126998 bytes
HTML transferred: 56550211 bytes
Requests per second: 20.46 [#/sec] (mean)
Time per request: 4888.051 [ms] (mean)
Time per request: 48.881 [ms] (mean, across all concurrent requests)
Transfer rate: 1141.32 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 8 9 2.2 8 35
Processing: 2233 4820 451.5 4862 8046
Waiting: 2216 4803 451.0 4845 8029
Total: 2242 4829 451.6 4871 8057

Percentage of the requests served within a certain time (ms)
50% 4871
66% 4920
75% 4951
80% 4973
90% 5033
95% 5128
98% 6309
99% 6455
100% 8057 (longest request)
[/mt_tab]

[mt_tab title="Nginx"]
Server Software: nginx/1.4.3
Server Hostname: nginx.ipragmatech.com
Server Port: 80

Document Path: /
Document Length: 59881 bytes

Concurrency Level: 100
Time taken for tests: 49.159 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 60301000 bytes
HTML transferred: 59881000 bytes
Requests per second: 20.34 [#/sec] (mean)
Time per request: 4915.853 [ms] (mean)
Time per request: 49.159 [ms] (mean, across all concurrent requests)
Transfer rate: 1197.91 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.8 0 4
Processing: 1481 4833 764.6 4884 8839
Waiting: 1061 2645 590.8 2609 6523
Total: 1481 4834 764.7 4884 8843

Percentage of the requests served within a certain time (ms)
50% 4884
66% 4947
75% 4979
80% 4999
90% 5072
95% 5181
98% 7335
99% 8624
100% 8843 (longest request)
[/mt_tab]

[mt_tab title="Open LiteSpeed"]
Server Software: LiteSpeed
Server Hostname: litespeed.ipragmatech.com
Server Port: 8088

Document Path: /wordpress/
Document Length: 59604 bytes

Concurrency Level: 100
Time taken for tests: 48.549 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 60003000 bytes
HTML transferred: 59604000 bytes
Requests per second: 20.60 [#/sec] (mean)
Time per request: 4854.911 [ms] (mean)
Time per request: 48.549 [ms] (mean, across all concurrent requests)
Transfer rate: 1206.96 [Kbytes/sec] received

Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 1 2.2 0 9
Processing: 3604 4823 1388.4 4414 9243
Waiting: 2212 3014 1382.2 2590 7330
Total: 3604 4824 1390.5 4414 9252

Percentage of the requests served within a certain time (ms)
50% 4414
66% 4488
75% 4543
80% 4578
90% 8705
95% 8948
98% 9054
99% 9118
100% 9252 (longest request)
[/mt_tab]
[/mt_tabs]

WordPress Hosting : Best webserver for high performance on high traffic in 2017 was last modified: July 7th, 2017 by Kapil Jain
7 replies
  1. Wally Day
    Wally Day says:

    Interesting results. But, I wonder about the base setup of the WP site. How many plugins? I assume premium theme.

    Have you done the same tests using a bare bones site, and then scaled it up to see performance gains/losses?

    Reply
    • Kapil Jain
      Kapil Jain says:

      Hey Wally,

      Thanks for taking the time and energy to read this article. We used a bare minimum of plugins (Quick Cache) and the WordPress default theme. We took the test data from http://codex.wordpress.org/Theme_Unit_Test. We tested with some data on the site rather than a bare-bones site, as we feel that without any test data the results wouldn't be realistic. This is the reason we chose WordPress for load testing on these servers.

      Feel free to contact us if you have any question.

      Reply
  2. Michael
    Michael says:

    Hi,

    (Disclaimer: I work for LiteSpeed technologies…)

    I’m a little confused as to how you came to your conclusions. There seem to be very few differences in the results but, if anything, OpenLiteSpeed was faster than the other competitors:

    OpenLiteSpeed served more requests per second than nginx or Apache + nginx — 20.60 vs. 20.34 and 20.46
    OpenLiteSpeed also had less time per request — 48.549ms per request vs. 49.159 and 48.881

    I’m really curious as to which piece of data your 15% and 10% faster conclusions are based on…

    It might also be noted that the nginx + Apache setup had 8 failed requests.

    Cheers,

    Michael

    Reply
    • Kapil Jain
      Kapil Jain says:

      Hey Michael,

      You are right that there is very little difference in the results for requests/second or time per request, but if you look at the longest time taken, OpenLiteSpeed took 9252ms vs. 8843ms (Nginx) vs. 8057ms (Apache+Nginx), and its performance starts degrading under high load, as depicted in the graph. OpenLiteSpeed performs better until ~900 requests, but its response time shoots up drastically as the number of requests increases.

      Regarding the failed requests, it's the content length that failed, not exceptions.

      There is no doubt that LiteSpeed's performance is better initially, and that's the reason you see better requests/sec and time/request, but it clearly degraded as the number of requests increased.

      If you feel the performance would be better if we tweaked the OpenLiteSpeed configuration, then feel free to send me the settings at [email protected] (FYI, I used the optimization settings mentioned in one of your articles). We will apply those settings and run these tests again (with a larger number of requests) to evaluate which server performs better 🙂

      Reply
      • Michael
        Michael says:

        Howdy Kapil,

        Thanks for bringing this up again on vpsBoard. I had forgotten to respond.

        I believe you are reading your table incorrectly. Your table seems to be a representation of the spread of the times requests took, i.e. this data:

        Percentage of the requests served within a certain time (ms)
        50% 4414
        66% 4488
        75% 4543
        80% 4578
        90% 8705
        95% 8948
        98% 9054
        99% 9118
        100% 9252 (longest request)

        It has no relation to increasing concurrency, as concurrency was kept the same throughout your test (at 100 connections). I do not think that the x-axis is time, but rather that these requests were ordered by the length of time they took. That is why, on the table, requests only get slower as you move along the x-axis.

        What the table does show is that OpenLiteSpeed’s slowest requests were slower than the other setups’ slowest requests. (This may be where your 15% difference number is from.) However, the table also shows that the majority of requests served by OpenLiteSpeed (about 75%?) were served faster than the other setups. Also, on average, OpenLiteSpeed served requests marginally faster than both other setups, as noted by the average requests per second and median connection times data.

        I’m not sure if I’ll be able to find someone with time to look at what can be tweaked on your settings, though that might be interesting. I’ll definitely see if there’s someone here interested in taking that on.

        Cheers,

        Michael

        Reply
        • Kapil Jain
          Kapil Jain says:

          Hey Michael,

          Thanks for replying, though I expected your reply a bit earlier :-).

          There is no miscalculation or confusion in the final verdict. There are two sets of parameters behind the conclusion: the longest time taken by the servers, and the progressive performance (response time vs. number of requests) as clearly shown in the graph. You can clearly see that at ~900 requests, the OpenLiteSpeed response time shoots up exponentially while the other servers are stable at that point, though later they also shoot up.

          You could be right that the average requests/sec handled by OpenLiteSpeed seems marginally better, because it was serving better for the first 80-90% of the requests, but it starts degrading once the requests get closer to 900. I am sure that if we had more test requests (maybe 2000), then OpenLiteSpeed's performance would be worse than its current average requests/sec.

          No problem if there is no one available to check the settings on the server; we used the best configurations for OpenLiteSpeed from your documentation. If you can find someone, that would be good, and more people could take advantage of OpenLiteSpeed if it could perform better.

          Feel free to contact us if you have any question.

          Cheers,
          Kapil

          Reply
          • Michael
            Michael says:

            Howdy Kapil,

            Yeah… I meant to get back to you before, but got involved with other things on our site, etc. and it slipped my mind.

            I still think you’re misinterpreting the graph. I think that if you ran the same test with 2000 requests and then graphed it the way you have here, you would find the degradation at about 1800. This is because the graph shows not a degradation at around the 900th request, but rather that OpenLiteSpeed serves 90% of requests very, very quickly and 10% a little slower. I think these percentages would be replicated with a larger sample.

            In my experience (and I could be wrong), I have never seen a web server that linearly degrades as it receives requests (as all of these graphs seem to show). This leads me to believe that the graphs do not show performance over time, but rather the spread of performance from fastest to slowest requests. I, of course, have seen cases where web server performance degrades as traffic (concurrency) increases (especially with Apache), but your ab settings have kept concurrency controlled throughout the test.

            A quick test with more requests would show whether I’m right, though I don’t know if you have the setups ready to go now (and I’ve found that benchmark test results can vary widely from one time to another, especially with a small amount of requests).

            Cheers,

            Michael
