The title of this page is "performance"; however, multiple meanings can be found within that single term.
Scalability
Scalability refers to a system's ability to handle more concurrent requests. It does not refer to speed, so strictly speaking it is only partially related to performance. A more performant system is likely to scale better because it can dispatch requests more quickly; scalability, however, is all about being able to replicate that performance across more servers.
On the server side, Drupal is not known for being all that performant. Page loads are on the slow side, although that has certainly been improving with the speed-ups we have seen in recent versions of PHP, and Drupal sites do feel much snappier on the backend these days. That aside, the Drupal ecosystem has ensured that it scales well using a number of techniques, mostly based around caching of one sort or another:
- Entity caching enables entities to be loaded much faster, avoiding the heavy overhead of loading nodes and the various fields they may support.
- Page caching enables entire pages to be served quickly, based on the role of the user accessing them.
- Caching values in RAM rather than on disk also helps, with data being served much more efficiently from the backend.
- Caching in reverse proxies such as Varnish, along with the serving of assets from CDNs, enables fast delivery of sites to users all around the world.
- Sophisticated cache invalidation, driven by cache tags, ensures that data stays cached for as long as possible (a minimal sketch of this machinery follows this list).
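To give a feel for how these layers work under the hood, here is a minimal sketch using Drupal's Cache API directly, with tag-based invalidation. The cache ID, the `node_list` tag choice and the helper `mymodule_build_expensive_listing()` are placeholders for illustration only.

```php
<?php

use Drupal\Core\Cache\Cache;
use Drupal\Core\Cache\CacheBackendInterface;

// Try to load a pre-computed value from the default cache bin.
$cid = 'mymodule:expensive_listing';
if ($cached = \Drupal::cache()->get($cid)) {
  $data = $cached->data;
}
else {
  // The expensive work only happens on a cache miss.
  $data = mymodule_build_expensive_listing();

  // Store the result until the tagged content changes. Tagging the entry
  // with 'node_list' means any node save will invalidate it.
  \Drupal::cache()->set($cid, $data, CacheBackendInterface::CACHE_PERMANENT, ['node_list']);
}

// Elsewhere, invalidating a tag clears every cache entry that carries it.
Cache::invalidateTags(['node_list']);
```

The entity, render and page caches described above are built on this same machinery, which is why tag-based invalidation can keep them fresh without flushing everything.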
Drupal has therefore evolved a number of techniques which make it a scalable enterprise platform. Mature hosting platforms such as Pantheon, Acquia and GovCMS (and others) ensure that these caching layers are in place and configured properly. A scalable system is within easy reach of most Drupal developers and implementors.
Performance
It is of course very important to have performant code on the backend. If there are problems in the algorithms you are using, throwing more hardware at them is not going to help. Well written code and the appropriate use of database indexes can lead to improvements of many orders of magnitude, as the sketch below suggests.
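As a small illustration of the index point, a custom table defined through Drupal's hook_schema() can declare indexes on the columns it will be queried by. The module, table and column names here are hypothetical.

```php
<?php

/**
 * Implements hook_schema().
 */
function mymodule_schema() {
  $schema['mymodule_log'] = [
    'description' => 'Log entries queried by user and creation time.',
    'fields' => [
      'id' => ['type' => 'serial', 'not null' => TRUE],
      'uid' => ['type' => 'int', 'not null' => TRUE, 'default' => 0],
      'created' => ['type' => 'int', 'not null' => TRUE, 'default' => 0],
      'message' => ['type' => 'text', 'not null' => FALSE],
    ],
    'primary key' => ['id'],
    // Without this index, queries filtering by user and date fall back to a
    // full table scan; with it, lookups stay fast as the table grows.
    'indexes' => [
      'uid_created' => ['uid', 'created'],
    ],
  ];
  return $schema;
}
```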
That is all on the server side.
Assuming the server side is performing well, the real place users perceive performance is on the client side, where latency and the loading of assets come into play. If the HTML and its associated assets have not been assembled correctly, the page will feel slow to users. This is one area where a well designed and implemented site makes all the difference.
Lighthouse
Developers have access to a very handy tool called Lighthouse, which reports on a site's client-side performance and quality. It measures, amongst other things:
- page performance
- SEO
- accessibility and
- best practices.
Optimising these scores is a solid way to improve a site, because Google uses these measurements when assessing the quality of a site and they most likely feed back into its search ranking algorithms. A site that performs well on the frontend therefore brings tangible benefits for both your users and your SEO rankings.
The Morpht approach
Morpht pays particular attention to the performance score provided by Lighthouse. It is generally not possible to attain a perfect score, but it is worth trying. For each site build we will:
- Use responsive images to ensure they are correctly sized for the device.
- Size images appropriately to ensure too much data is not being sent.
- Aggregate assets such as CSS and JS to reduce the number of requests (see the configuration sketch after this list).
- Use protocols such as HTTP/2 for more efficient downloading of assets.
- Avoid using too many JavaScript libraries.
- Avoid code which blocks the display and interactivity of the page.
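As an example of the aggregation point above, CSS and JavaScript aggregation can be locked on through configuration overrides in settings.php. This is a sketch only; the page cache lifetime shown is an arbitrary value to illustrate the setting.

```php
<?php

// settings.php overrides: force asset aggregation on in every environment
// so it cannot be switched off accidentally in production.
$config['system.performance']['css']['preprocess'] = TRUE;
$config['system.performance']['js']['preprocess'] = TRUE;

// Let proxies and browsers cache full pages. The 900 seconds here is an
// example value; tune it to how often your content changes.
$config['system.performance']['cache']['page']['max_age'] = 900;
```

Settings like these, combined with the attention to images, libraries and blocking code listed above, are what move the Lighthouse performance score in the right direction.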