Website speed test
Find Out All the Ways to Speed Up Site Loading
Table of contents
- Website speed: the main components
- Measuring website speed
- Server optimization
- Client optimization
- Using CDN
Everyone knows that a slow site is bad. A slow site causes real problems in everyday tasks, and sometimes it is simply annoying. In extreme cases a slow site amounts to a denial of service: instead of waiting for the page to load, visitors simply leave. This is especially relevant when the slowdown is radical, for example when the page starts rendering only 8-10 seconds after the click.
Even in a relatively favorable situation (a fast wired connection and a modern computer), loading delays can lead to audience loss and lower conversion rates. For example, Amazon conducted an experiment which found that every 100 ms (0.1 s) of delay reduces sales by 1%.
Moreover, more than half of today's Internet audience accesses websites from mobile devices, which means slower network channels and slower processors.
The third reason why site speed matters is technical. Slow sites typically consume more hosting resources, which means extra costs, and a slow server side is less able to handle problematic peak loads.
Therefore, the speed of the site should be dealt with both from a technical and an economic point of view. In this article we will concentrate on the technical side of website acceleration.
Website speed: the main components
Site speed has two sides: the client and the server. Today each contributes comparably to the final result, but each has its own characteristics.
To understand what makes up a page's loading time, let's walk through the process. This will show where the opportunities for server and client optimization lie.
The full process of downloading the site (first visit) is as follows:
1. DNS query by site name.
2. Connecting to the server by IP address (TCP connection).
3. Establishing a secure connection when HTTPS is used (TLS connection).
4. Requesting the HTML page by URL and waiting for the server's response (HTTP request).
5. Loading the HTML.
6. Parsing the HTML document in the browser and building a queue of requests for the document's resources.
7. Loading and parsing CSS styles.
8. Loading and executing JS code.
9. Start of page rendering, execution of JS code.
10. Loading web fonts.
11. Loading images and other elements.
12. End of page rendering, execution of deferred JS code.
Some of these stages run in parallel and some may swap places, but the essence stays the same.
Server optimization covers steps 1 through 4; steps 5 through 12 are the domain of client optimization. The time spent on each stage is individual for every site, so you need to collect your site's metrics and identify the main source of problems. This brings us to the question of how to obtain these metrics and interpret them.
Measuring website speed
The main question is: what needs to be measured? There are many metrics for the speed of sites, but there are not so many basic ones.
First is time to first byte (TTFB): the time from the start of the loading process until the first portion of data arrives from the server. This is the main metric for server optimization.
Second is the start of page rendering (start render, first paint). This metric marks the end of the "white screen" period, when the browser begins to draw the page.
Third is the loading of the page's main elements (load time). This covers the loading and interpreting of all resources needed to work with the page; at this mark the page load indicator stops spinning.
Fourth is the full loading of the page: the time until the browser's main activity ends, with all primary and deferred resources loaded.
These basic metrics are measured in seconds. It is also useful to track the amount of traffic for the third and fourth metrics, since traffic determines how much connection speed affects load time.
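As a rough illustration of the TTFB metric, the sketch below times the interval between issuing a request and reading the first byte of the response body. It spins up a throwaway local HTTP server so the example is self-contained; in practice you would point `measure_ttfb()` at your own site's URL (the port number and helper names here are arbitrary).

```python
import http.server
import socketserver
import threading
import time
import urllib.request

def start_test_server(port=8765):
    # Throwaway local server standing in for a real site
    httpd = socketserver.TCPServer(("127.0.0.1", port),
                                   http.server.SimpleHTTPRequestHandler)
    threading.Thread(target=httpd.serve_forever, daemon=True).start()
    return httpd

def measure_ttfb(url):
    """Rough TTFB: time from issuing the request to reading the first byte."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read(1)                 # first byte of the body
        return time.perf_counter() - start

httpd = start_test_server()
ttfb = measure_ttfb("http://127.0.0.1:8765/")
print(f"TTFB: {ttfb * 1000:.1f} ms")
httpd.shutdown()
```

Note that this figure includes connection setup, which is exactly what a first visit experiences; dedicated tools break the interval down further into DNS, TCP, TLS and server wait.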
Now we need to understand how to test the speed. There are many services and tools for assessing the metrics of the speed of downloading sites, each of which is better for its task.
One of the most powerful tools is the browser's developer panel, with Chrome offering the most advanced functionality. On the Network tab you can get load-time metrics for every element, including the HTML document itself. Hovering over an item shows how much time each step of fetching that resource took. For the full picture of the page load process, use the Performance tab, which gives details down to image decoding time.
If you need to assess site speed without full granularity, it is useful to run an audit (the Audits tab), performed by the built-in Lighthouse tool. The report gives a speed assessment for mobile devices (both an integral score in points and our basic metrics) along with several other reports.
To quickly evaluate client optimization, you can use the Google PageSpeed Insights service or Sitechecker (which uses the Google PageSpeed Insights API). Finally, it is useful to analyze loading time for real users. For this there are dedicated reports in the web analytics systems Yandex.Metrica and Google Analytics.
Reference targets for loading time are as follows: start of rendering at about 1 second, full page load within 3-5 seconds. Within this framework users will not complain about speed, and load time will not limit the site's effectiveness. These figures should be achieved for real users, even under difficult conditions: mobile connections and outdated devices.
Server optimization
Let's move on to accelerating the site. Optimizing the server side is the most understandable and obvious measure for site developers. First, the server side is easy for system administrators to monitor and control. Second, when there are serious problems with server response time, the slowdown is noticeable to everyone, regardless of connection speed or device.
While the reasons for the slowdown of the server side can be very diverse, there are typical places to look at.
Hosting (server resources)
This is the number one cause of slowdowns for small sites: the site simply does not have enough hosting resources (usually CPU or disk speed) for its current load. If you can quickly increase these resources, it is worth a try; in some cases that solves the problem. If the cost of additional resources exceeds the cost of optimization work, move on to the following methods.
DBMS (database server)
Here we turn to the real source of the problem: slow program code. Often most of a web application's time is spent on database queries. This is logical, because a web application's job is to collect data and transform it according to a template.
Solving the problem of slow database responses is usually divided into two stages: DBMS tuning, then optimization of queries and data schemas. Tuning the DBMS (for example, MySQL) can speed things up several times over if it has never been done before; further fine tuning typically yields an improvement on the order of tens of percent.
Optimizing queries and data schemas is the radical way to accelerate: this kind of optimization can yield speedups of several orders of magnitude. While changes to the database structure can sometimes be made without touching the site's program code, query optimization does require intervening in the code.
To identify slow queries, you need to collect statistics on the load on the database for a fairly long period of time. Then the log is analyzed and the candidates for optimization are identified.
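The check-a-candidate workflow can be sketched with SQLite (the table, column names and data are made up for illustration; with MySQL you would run its EXPLAIN on queries taken from the slow query log): the query plan shows a full table scan before the index exists and an index search afterwards.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL)")
con.executemany("INSERT INTO orders (customer_id, total) VALUES (?, ?)",
                [(i % 100, i * 1.5) for i in range(1000)])

def plan(query):
    # The 4th column of EXPLAIN QUERY PLAN output describes the access path
    return " ".join(row[3] for row in con.execute("EXPLAIN QUERY PLAN " + query))

query = "SELECT * FROM orders WHERE customer_id = 42"
print(plan(query))   # full table scan: every row is examined

con.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
print(plan(query))   # now resolved through the index
```

The same before/after comparison, applied to the heaviest queries from the log, is the core loop of this kind of optimization.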
Effect of CMS and program code
It is quite widely believed that site speed depends only on the CMS ("engine"), and site owners often try to divide CMSs into fast and slow. In reality this is not quite true.
Of course, the server load depends on the CMS code in use. However, most popular systems are optimized for speed, so the CMS itself should not cause fatal speed problems.
However, in addition to the main CMS code, the site can also contain additional modules (plug-ins), extensions and modifications from the developers of the site. And this code can have a negative impact on the speed of the site.
In addition, speed problems occur when the system is misused. For example, the system for blogs is used to create a store. Or the system for small sites is used to develop a portal.
Caching
The most powerful and universal means of increasing server speed is, traditionally, caching. Here we are talking about server-side caching, not about caching headers. If computing a result (assembling a page or a block) requires significant resources, put the result in a cache and refresh it periodically. The idea is simple and complex at the same time: caching systems are built into programming languages, content management systems and web servers.
Typically, page caching reduces page generation time to tens of milliseconds, and a server with such caching easily survives traffic peaks. There are two difficulties: not everything can be cached, and the cache must be invalidated (discarded) correctly. If both problems are solved, caching can be recommended as an effective means of server acceleration.
Optimization of TCP, TLS, HTTP/2
In this part we combine the subtle network optimizations that accelerate the server side. The effect here is smaller than with the other methods, but it is achieved purely through configuration, that is, for free.
TCP tuning is needed today mainly for large projects and servers with 10G-class connections. The main thing to remember is that the network subsystem improves with each new Linux kernel release, so it is worth staying up to date. Correctly configuring TLS (HTTPS) yields a high level of security while minimizing the time to establish a secure connection; Mozilla publishes good recommendations on this.
The new version of the HTTP protocol, HTTP/2, was designed to speed up site loading. The protocol appeared recently and is already actively used (around 20% of websites). HTTP/2 does contain genuine acceleration mechanisms, chiefly request multiplexing, which reduces the effect of network latency on page load time. But acceleration from HTTP/2 is not guaranteed, so do not rely on the protocol alone.
Client optimization
Unlike server optimization, client optimization targets everything that happens in the user's browser. This makes control harder (devices and browsers differ) and opens up many different optimization directions. We will look at the most effective and universal methods, applicable to almost any project.
Critical path optimization: CSS, JS
The critical rendering path is the set of resources needed to start rendering the page in the browser. Typically this list includes the HTML document itself, CSS styles, web fonts and JS code.
Our task as speed optimizers is to shorten this path both in time (taking into account network delays) and in traffic (to take into account slow connections).
The easiest way to determine the critical path is to run an audit in Chrome (in the developer panel): the Lighthouse tool identifies its composition and load time, simulating a slow connection.
The main technique for shortening the critical path is to remove everything that is not necessary or that can be postponed. For example, most JS code can be deferred until after the page has rendered: place the script call at the end of the HTML document, or use the async attribute.
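For illustration, a sketch of the markup (the file names are hypothetical; `defer` is the standard alternative to `async` that additionally preserves execution order):

```html
<!-- Illustrative file names; neither script blocks the initial render -->
<script src="/js/analytics.js" async></script>  <!-- runs as soon as it loads -->
<script src="/js/app.js" defer></script>        <!-- runs after HTML parsing -->
```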
For delayed loading of CSS, styles can be attached dynamically via JS (after the domContentLoaded event).
Optimizing Web Fonts
Web fonts have become almost a standard of modern design. Unfortunately, they negatively affect rendering speed: web fonts are additional resources that must be fetched before the text can be drawn.
The situation is worsened by the fact that pointers to font files are often buried in a CSS file, which itself does not arrive instantly. Many developers also like public web font services (for example, Google Fonts), which adds even more delay (extra connections, an extra CSS file).
Optimization comes down to reducing web font traffic and fetching the files as early as possible.
To reduce traffic, you need to use modern formats: WOFF2 for modern browsers, WOFF for compatibility. In addition, you need only include those character sets that are used on the site (for example, Latin and Cyrillic).
To speed up the display of web fonts, you can use the newer link rel="preload" hint and the CSS font-display property. Preload tells the browser as early as possible that a font file will be needed, while font-display gives flexible control over browser behavior when the file is delayed (wait, draw a fallback, or give up on the font after a few seconds).
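A sketch of how this can look (the font path and family name are placeholders):

```html
<!-- Announce the font file as early as possible -->
<link rel="preload" href="/fonts/site.woff2" as="font" type="font/woff2" crossorigin>

<style>
  @font-face {
    font-family: "Site Sans";
    src: url("/fonts/site.woff2") format("woff2"),
         url("/fonts/site.woff") format("woff");
    /* swap: draw text with a fallback immediately, then swap in the web font */
    font-display: swap;
  }
</style>
```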
Optimizing images
Images account for the majority of a modern site's weight. Pictures are not always as critical to a page as CSS and JS code, but for many sites images are a key part of the content: think of any product card in an online store.
The main technique for optimizing images is to reduce their size. To do this, use the correct format and compression tools:
- PNG for images with transparency and text;
- JPEG for photos and complex images;
- SVG for vector graphics.
In addition to these formats, new ones are being developed: for example, WebP from Google. This format can cover the area of use of PNG and JPEG – supports lossy and lossless compression, transparency and even animation. To use it, it’s enough to create a copy of the images in WebP and give them to the browsers that support them.
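One common way to serve such copies can be sketched as an Nginx fragment (it assumes a .webp file sits next to each original image; the directives are standard Nginx, but the layout is illustrative):

```nginx
# Pick the ".webp" suffix when the Accept header announces WebP support,
# otherwise fall back to the original file.
map $http_accept $webp_suffix {
    default          "";
    "~*image/webp"   ".webp";
}

server {
    location ~* \.(png|jpe?g)$ {
        # Vary tells caches that the response depends on the Accept header
        add_header Vary Accept;
        try_files $uri$webp_suffix $uri =404;
    }
}
```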
For PNG there are many optimization utilities that reduce file size, for example OptiPNG, PNGout and others. The compression itself can also be improved with ZopfliPNG. The main idea of such software is to select optimal compression parameters and remove unnecessary data from the file. Be careful here: some utilities have a lossy mode, which may not suit you if you expect pixel-identical output.
JPEG optimization also comes in two kinds, lossless and lossy. In general we can recommend the Mozilla JPEG package (mozjpeg), which is specifically designed for better compression in this format: jpegtran for lossless optimization and cjpeg for lossy.
Caching headers
This is the simplest method of client optimization. Its purpose is to have the browser cache rarely changing resources: images, CSS and JS files, fonts, sometimes even the HTML document itself. As a result, each resource is requested from the server only once.
If you are using Nginx, just add the directive:
add_header Cache-Control "max-age=31536000, immutable";
From that point on, the browser is allowed to cache the resource for up to a year (which is practically forever). The newer "immutable" parameter says that the resource is not going to change at all.
Of course, the question arises: what if we do need to change a cached resource? The answer is simple: change its address (URL), for example by adding a version to the file name. This method also applies to HTML documents, but there a much shorter cache period is usually used (for example, one minute or one hour).
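A sketch of such versioning, assuming a build step that renames assets by a fingerprint of their content (the helper name and hash length are arbitrary choices):

```python
import hashlib

def versioned_name(filename: str, content: bytes) -> str:
    """Embed a short content hash in the file name: app.css -> app.<hash>.css"""
    digest = hashlib.md5(content).hexdigest()[:8]
    stem, _, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}"

print(versioned_name("app.css", b"body { color: black }"))
# The name now encodes the content, so any change to the file
# produces a new URL and the old cached copy is simply never requested.
```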
Data compression
Compressing all text data in transit from server to browser is obligatory practice. Most web servers have gzip compression of responses implemented.
However, simply enabling compression is not enough.
First, the compression level is adjustable and should be set close to the maximum.
Second, you can use static compression: pre-compress files and store them on disk, so the web server can find the compressed version and serve it immediately. Third, more efficient algorithms are available: Zopfli (gzip-compatible) and Brotli (a newer algorithm, which browsers only use over HTTPS). Since these algorithms (especially Zopfli) are expensive at compression time, they are best used in the static variant.
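The static variant can be sketched in a few lines of Python with the standard gzip module (the file name and sample CSS are made up; a real build step would walk all text assets):

```python
import gzip
import os

# A repetitive text asset, standing in for a real stylesheet
css = ("body { margin: 0; padding: 0; }\n" * 200).encode()

with open("styles.css", "wb") as f:
    f.write(css)
# Pre-compress once at build time, at the highest gzip level,
# so the web server can hand out the .gz copy directly.
with open("styles.css.gz", "wb") as f:
    f.write(gzip.compress(css, compresslevel=9))

original = os.path.getsize("styles.css")
compressed = os.path.getsize("styles.css.gz")
print(f"{original} -> {compressed} bytes")
```

With Nginx, for example, the gzip_static module makes the server look for exactly such pre-built .gz neighbors.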
To maximize the effect of compression, files are first minified: removing unnecessary line breaks, spaces and other redundant characters. This process is specific to each format. You should also take care to compress the site's other text data.
Using CDN
Using a CDN (content delivery network) for website acceleration is a heavily advertised measure, with a lot of marketing hype around the essence of the technology.
Originally, CDNs were designed to offload the Internet channels of broadcast media sites. For example, several thousand viewers watching a live video put a very heavy load on the server's bandwidth. In addition, ensuring uninterrupted quality of communication is extremely difficult when the distance between client and server is large (due to network delays and instability).
The solution was the CDN: a distributed network to which clients (for example, viewers) connect, while the network's nodes in turn connect to the source server (the origin). The number of connections to the origin drops to one or a few, while the number of connections to the CDN can reach millions thanks to content caching within the network.
Today, most CDNs position themselves as a means of speeding up websites, primarily by reducing the distance from the content to the client (the site visitor).
How can I speed up a site using CDN?
First, the user generally connects to the nearest (by access time) CDN node, so TCP and TLS connections are established faster. If the content is already on that node, the user receives it quickly, and the load on our own server drops.
Second, a CDN can do more than pass content through unchanged: it can optimize content on its side and deliver it in a more compact form, compressing images, applying compression to text, and so on. Such optimizations can shorten download time further.
Disadvantages of using CDN
The disadvantages, as usual, are the flip side of the advantages: the requested object may not be in the CDN node's cache, for example because it has not yet been requested or because it cannot be cached (an HTML document). In that case we incur additional delays between the CDN node and our server.
Although CDNs are designed to speed up access to the site, there are situations where the network route through the CDN turns out less optimal than a direct one.
Finally, content delivery networks are very complex systems in which failures, instability and other problems are possible at every level. By using a CDN, we add one more layer of complexity.
Maintaining the result
Let's say you managed to achieve good site speed, and users and the resource's owners are happy. Does this mean you can now forget about speed? Of course not. To keep the site fast, you must constantly maintain and monitor it.
Any live web project is regularly updated, changes occur both in common templates (themes of design, interfaces), and content. Also, the program code (both client and server) is actively changing.
Each change can affect the speed of the site. To monitor this impact, you need to implement a system of synthetic site speed monitoring at the development stage. Thus, speed problems can be intercepted before users notice them.
To optimize the incoming content, integration of optimizing procedures into the content management system is required. First of all, this concerns image processing.
Acceleration of sites is a very dynamic area: new standards are emerging, their support by browsers is changing. Therefore, it is important to regularly audit the technology of the project, processes and software used.
Monitoring of real user speed
Synthetic testing under ideal laboratory conditions is very useful for assessing changes in the system's code, but it is not enough: in the end, we want the site to be fast for real users. Collecting such data is the job of real user monitoring (RUM).
To organize RUM, it is enough to connect one of the web analytics systems (Yandex.Metrica, Google Analytics) and see reports on the time of the site download. For more detailed and accurate data, you can use specialized speed monitoring services.
The theme of site speed is extensive and affects many aspects of developing and supporting a web application: from server code to content. This means that getting good results is impossible without involving the development team.
The most important thing: keep the users in mind, take into account the various conditions for using the site. Acceleration of the site is a process that occurs with different intensity throughout the life cycle of the project.