Many tools are out there to help you test the speed of your pages, and there are plenty of metrics that you can target. However, understanding how these optimisations work and knowing whether they’ll actually speed up your website is vital.
It can be difficult to understand page speed, so here we take a closer look at this complex subject with an advanced guide to improving your page speed.
In simple terms, page speed refers to the length of time each web page takes to load. It’s hard to give page speed a single number as numerous metrics will capture different elements of a page load in various ways, under different conditions and for a range of purposes.
Google has determined that mobile speed is a key ranking factor for its search engine algorithm, which has renewed the focus on page load speeds in recent years. However, page loading speed has actually been one of Google’s ranking factors for over a decade. It’s important because visitors want to enjoy a smooth and speedy experience without any noticeable lag or delay.
Speed also impacts analytics – if a visitor leaves your page before the analytics tag loads, they aren’t recorded. Studies have also shown that increased page speed boosts organic traffic, ads’ click-to-visit ratio and overall visitor numbers, among other benefits, so it’s certainly an important factor to bear in mind.
There isn’t an official page load speed threshold, but under three seconds is generally considered the target, since over half of all mobile visitors will leave any page that takes more than three seconds to load.
So, how can you make sure that your pages all load within this timeframe? Here, we look at some strategies that you can adopt to boost your page load speed.
At one time, best practice dictated that resources were hosted on cookie-free domains separate from the primary domain, and spreading resources across several domains often offered benefits because browsers limited the number of concurrent connections per domain. Since HTTP/2, however, this practice is no longer the best option: serving requests from a single server wherever possible is the new best practice for faster load times.
If you do decide to use another server, you should preconnect to it early in the page load so the browser can establish connections to the servers that host the files the page needs. This will also speed up the process.
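One way to deliver that hint is via the HTTP `Link` response header. The helper below is a small sketch (the function name and origins are hypothetical, not from any particular framework) that builds such a header:

```python
def resource_hint_header(origins):
    """Build a Link header value containing a preconnect hint for each
    origin. Browsers that receive these hints can start the DNS lookup,
    TCP handshake and TLS negotiation before the first resource request."""
    return ", ".join(f"<{origin}>; rel=preconnect" for origin in origins)

# Hypothetical third-party origins the page depends on.
header = resource_hint_header(["https://fonts.example.com",
                               "https://cdn.example.com"])
print("Link:", header)
```

The same hint can also be placed directly in the page markup as `<link rel="preconnect" href="...">`.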
Redirects delay page rendering and slow down the user experience. Every redirect adds an additional HTTP request-response round trip, and often extra round trips for the DNS lookup, TLS negotiation and TCP handshake as well. The solution is to ensure your website is as responsive as possible, with at most a single redirect from any given URL to the landing page, or, even better, no redirects at all.
If you do have to have redirects in place, select the right type to suit your needs: a permanent redirect (301, or 308 to preserve the request method) when content has moved for good, and a temporary redirect (302, or 307) when the original URL will come back into use.
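To illustrate the difference between permanent and temporary redirects, here is a minimal standard-library sketch (the paths are hypothetical). A 301 tells clients and caches that the move is permanent, while a 302 marks it as temporary:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Old path -> (status code, new location).
REDIRECTS = {
    "/old-page": (301, "/new-page"),       # moved permanently
    "/sale":     (302, "/holiday-sale"),   # temporary; old URL will return
}

class RedirectHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path in REDIRECTS:
            status, location = REDIRECTS[self.path]
            self.send_response(status)
            self.send_header("Location", location)
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

# To run it: HTTPServer(("127.0.0.1", 8000), RedirectHandler).serve_forever()
```

In production the same mapping would normally live in your web server or CDN configuration rather than application code.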
When your content size is reduced, downloading the resources takes less time, the client uses less data and page rendering improves. Any content that can be compressed should be gzipped, but best practice is to make sure any unnecessary data has been eliminated before compression.
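The effect is easy to demonstrate with the standard library. This sketch gzips a chunk of repetitive markup and compares sizes; on a real site the web server usually applies this per response, sending `Content-Encoding: gzip` to clients that accept it:

```python
import gzip

# Repetitive markup, which is typical of HTML and compresses very well.
html = b"<li class='product'>Example product listing</li>" * 500

compressed = gzip.compress(html, compresslevel=6)
print(f"original:   {len(html):>6} bytes")
print(f"compressed: {len(compressed):>6} bytes")
```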
A speedy server response time is an absolute necessity, and this means top-quality web development couldn’t be more important to avoid CPU starvation, slow database queries, slow application logic and routing, and slow libraries and frameworks. Google recommends that server response times stay under 200ms at all times, and by using tools to measure your page’s server response time you can work out what is slowing down content delivery.
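As a rough illustration, you can approximate a server’s response time from Python by timing how long the first byte of the response takes to arrive. This is a sketch only; dedicated monitoring tools measure time to first byte far more precisely:

```python
import time
import urllib.request

def time_to_first_byte(url: str) -> float:
    """Return an approximate time to first byte for `url`, in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url) as response:
        response.read(1)  # block until the first byte of the body arrives
    return (time.perf_counter() - start) * 1000

# e.g. time_to_first_byte("https://www.example.com/")  # aim for under 200ms
```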
Bear in mind that even when your tests tell you that your response time is under 200ms, older-generation Android devices on slow 3G connections could have a far slower experience. To give these users an optimal experience, aim for a Speed Index under 1,250, a first meaningful paint of under 1s, and a time to interactive of under 5s on a first visit and under 2s on repeat visits.
You can optimise your server response times by using HTTP/2, enabling OCSP stapling for faster TLS handshakes and supporting both IPv4 and IPv6. You should also add resource hints, as these speed delivery with quicker preconnect, prefetch, preload and DNS lookups.
While fetching resources, every extra round trip between client and server means additional delay as well as greater data costs for users. This expensive, slow process can be mitigated by putting a caching policy in place that lets the client determine when and whether previously returned responses can be reused.
Google recommends implementing caching policies that answer whether each resource can be cached and, if so, who can cache it and for how long. These policies should also specify how resources can be revalidated once the cached copy expires. Google recommends a minimum cache time of one week, and up to a year for static assets. Use Cache-Control to eliminate network latency and avoid data charges, and ETags for efficient revalidation.
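That policy can be sketched in a few lines of code. The helper names are hypothetical, and the MD5-based ETag is just one choice (any strong hash of the body works); the point is the one-week/one-year lifetimes and the 304 revalidation check:

```python
import hashlib

ONE_WEEK = 7 * 24 * 3600
ONE_YEAR = 365 * 24 * 3600

def cache_headers(body: bytes, static_asset: bool) -> dict:
    """Headers for the policy above: one week minimum, up to a year for
    static assets, plus an ETag so clients can revalidate cheaply."""
    max_age = ONE_YEAR if static_asset else ONE_WEEK
    etag = '"%s"' % hashlib.md5(body).hexdigest()
    return {"Cache-Control": f"public, max-age={max_age}", "ETag": etag}

def revalidate(body: bytes, if_none_match: str, static_asset: bool = False) -> int:
    """Return 304 (Not Modified) if the client's cached copy is still
    current, otherwise 200 so the full body is resent."""
    current = cache_headers(body, static_asset)["ETag"]
    return 304 if if_none_match == current else 200
```

A 304 response carries no body, so a successful revalidation costs one small round trip instead of a full download.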
Minification eliminates redundant data from the resources delivered to visitors, and this can have a major positive impact on the performance and speed of your site overall. Your web assets should contain no redundant data such as whitespace or comments in HTML, unnecessary image metadata, or repeated styles in CSS.
For the best results, minification should be used alongside compression, and it should be applied to other resource types too: video, images and other content can also be minified. Tools such as the PageSpeed Module can reduce the minification burden.
Images typically account for around 60% of your webpage’s total size, and oversized images will slow your site down excessively. Optimising images reduces file size without any significant negative impact on visual quality. Google recommends using relative sizes for each image, with the picture element used when you need to specify different images to suit device characteristics.
You should follow these guidelines to maximise your impact: prefer vector formats where possible, serve appropriately scaled images rather than relying on the browser to resize them, strip unnecessary metadata, and use srcset so each device downloads only the resolution it needs.
Typically, browsers follow a five-step process when rendering pages: building the DOM from the HTML, building the CSSOM from the CSS, combining the two into the render tree, running layout to compute each element’s position and size, and finally painting pixels to the screen.
Pages must process CSS before they can be rendered, so if the CSS is bloated with render-blocking external stylesheets, the process will usually require several round trips, delaying the first render. Best practice is to inline small CSS directly into your HTML document, eliminating those small external CSS resources. However, you should avoid inlining larger CSS files or using inline CSS attributes.
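A simple build step can do this inlining automatically. The sketch below (hypothetical helper; the pattern is deliberately simplified and only matches one attribute order) swaps small stylesheet links for inline `<style>` blocks and leaves large files external:

```python
import re
from pathlib import Path

INLINE_LIMIT = 4096  # assumption: treat stylesheets under ~4kB as "small"

def inline_small_css(html: str, root: Path) -> str:
    """Replace <link rel="stylesheet"> tags with inline <style> blocks
    when the referenced file is small enough to be worth inlining."""
    def replace(match):
        css_file = root / match.group(1)
        if css_file.exists() and css_file.stat().st_size <= INLINE_LIMIT:
            return "<style>%s</style>" % css_file.read_text().strip()
        return match.group(0)  # keep large or missing files external

    # Simplified pattern: real HTML allows attributes in any order.
    pattern = r'<link rel="stylesheet" href="([^"]+)">'
    return re.sub(pattern, replace, html)
```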
When your above-the-fold content exceeds the initial congestion window, loading and rendering it requires several round trips. This causes high latency and major delays to your page load times, particularly for mobile users. The size of your above-the-fold content should therefore be kept under 14kB (compressed).
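A quick way to sanity-check this budget is to gzip the critical-path HTML and compare it to 14kB, roughly what a typical initial congestion window delivers in one round trip (the exact figure is an assumption; it depends on the server’s TCP configuration):

```python
import gzip

CWND_BUDGET = 14 * 1024  # ~14kB: fits the typical initial congestion window

def fits_first_round_trip(above_fold_html: str) -> bool:
    """True if the compressed critical content fits in one round trip."""
    compressed_size = len(gzip.compress(above_fold_html.encode()))
    return compressed_size <= CWND_BUDGET

print(fits_first_round_trip("<html><body><h1>Hero</h1></body></html>"))  # → True
```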
You should also limit the data required to render above-the-fold content by using image optimisation, resource minification and compression. Organising the HTML markup so that above-the-fold content renders first will greatly speed up how quickly that content appears.
It has been suggested that Google uses a page’s theoretical speed (laboratory data) alongside real-user field data when determining page speed, similar to the data used to create the Chrome User Experience Report (CrUX). Since Google hasn’t publicly confirmed its data sources, CrUX data and PageSpeed Insights are likely good representations of the data Google is using.
Once you have put in place all of the suggestions that we’ve made above, the next step is to determine the impact your speed update has had on your page load times. So, how can you go about doing this?
The simplest way to estimate the impact on speed is to make static copies of your pages. Copy the code to the server and test the page to establish baseline metrics, then make your changes to each page and test again. This shows the rough impact of each change, so once you make the changes on the live site you’ll have a good idea of the expected impact.
It’s clear that you should always focus on making your website as quick for users as possible. To achieve this in the most effective way, you should choose the metrics that best represent how users experience your page’s interactivity and load and make improvements on those first.
There isn’t a defined threshold at which improving your page speeds should come to an end, but you’ll often find that you’ll reach a stage at which the potential benefits are no longer worth the costs, effort and time involved. There may also be possible negative trade-offs such as the loss of a valuable tool when you reach this point. That’s when it’s time to stop improving your page speeds.
Ideally, when it comes to maximising your page speed, the best stage to reach is when your pages load a little more quickly than those of your competitors, but as long as users can enjoy a smooth and speedy experience when they access your content, you’ve achieved your goal.