Sometimes known as PSI for short, PageSpeed Insights is a report about how a page performs on both desktop and mobile devices. It also offers helpful suggestions about how the page could be improved.
PageSpeed Insights supplies both field and lab data about the page in question. Both types of data can be extremely useful in different ways. Lab data helps when it comes to resolving performance issues since the data is collected in an environment that is controlled.
Nevertheless, it fails when it comes to capturing potential real-life bottlenecks. Meanwhile, field data can help with the capture of real-world, true-to-life user experiences, but the metrics are more limited.
PSI will always provide a score at the top of the report to summarise the performance of the page. This score is worked out by running Lighthouse to collect and then analyse lab data about the page. A score of 90 or higher is considered good, a score of 50–89 means improvements are needed, and a score under 50 is considered poor.
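As a rough sketch of the banding logic described above (the function name is ours, not part of PSI itself), the score classification could be expressed as:

```python
def classify_performance_score(score: int) -> str:
    """Band a PSI performance score (0-100) using the thresholds above.

    Illustrative helper, not Google's actual implementation.
    """
    if score >= 90:
        return "good"
    if score >= 50:
        return "needs improvement"
    return "poor"

print(classify_performance_score(92))  # good
print(classify_performance_score(67))  # needs improvement
print(classify_performance_score(38))  # poor
```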
When PageSpeed Insights is presented with a URL, it looks it up in the CrUX (Chrome User Experience Report) dataset. If the data is available, it will report the FCP (First Contentful Paint), FID (First Input Delay), LCP (Largest Contentful Paint) and CLS (Cumulative Layout Shift) metric data for the origin and, where possible, for the specific page URL.
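To make the lookup concrete, here is a minimal sketch of pulling those field metrics out of a PageSpeed Insights v5 API response. The `sample_response` dict is illustrative data we made up; the key names follow the shape of the API's `loadingExperience` object, and `field_metrics` is our own helper name.

```python
# Illustrative sample, loosely following the PSI v5 loadingExperience shape.
sample_response = {
    "loadingExperience": {
        "metrics": {
            "FIRST_CONTENTFUL_PAINT_MS": {"percentile": 1200, "category": "AVERAGE"},
            "LARGEST_CONTENTFUL_PAINT_MS": {"percentile": 2100, "category": "FAST"},
            "FIRST_INPUT_DELAY_MS": {"percentile": 80, "category": "FAST"},
            "CUMULATIVE_LAYOUT_SHIFT_SCORE": {"percentile": 5, "category": "FAST"},
        }
    }
}

def field_metrics(response: dict) -> dict:
    """Return {metric: (75th-percentile value, category)} for page-level field data."""
    metrics = response.get("loadingExperience", {}).get("metrics", {})
    return {name: (m["percentile"], m["category"]) for name, m in metrics.items()}

print(field_metrics(sample_response)["LARGEST_CONTENTFUL_PAINT_MS"])  # (2100, 'FAST')
```

The origin-level data sits under a sibling `originLoadingExperience` key and can be read the same way.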
PageSpeed Insights will classify field data in three ways, describing the user experience as either good, needs improvement or poor, according to fixed thresholds set for each metric.
PSI will present these metrics distributed in such a way that developers are able to understand the FCP, FID, CLS and LCP range of values for that origin or page. The distribution will also be divided into three separate categories – Poor, Needs Improvement and Good, each one denoted with a coloured bar (red for poor, orange for needs improvement and green for good).
So, as an example, if 14% appears in the orange bar for FCP, this indicates that 14% of the observed FCP values fell between 1,000ms and 3,000ms. The data represents an aggregate view of page loads over a 28-day collection period.
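The bucketing behind those coloured bars can be sketched as follows. This is our own illustration, using the FCP cut-offs from the example above; other metrics use different thresholds.

```python
def fcp_distribution(samples_ms, good_max=1000, ni_max=3000):
    """Split observed FCP values (ms) into the good / needs-improvement / poor
    buckets shown as PSI's green, orange and red bars.

    Thresholds follow the FCP example in the text; illustrative only.
    """
    n = len(samples_ms)
    good = sum(1 for v in samples_ms if v < good_max)
    ni = sum(1 for v in samples_ms if good_max <= v < ni_max)
    poor = n - good - ni
    return {
        "good": 100 * good / n,
        "needs improvement": 100 * ni / n,
        "poor": 100 * poor / n,
    }

print(fcp_distribution([600, 800, 1500, 2500, 3500]))
```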
PSI also reports the 75th percentile of every metric so that developers can see at a glance what the experience is like for the most frustrated users. As above, these metric values are classified as good, needs improvement or poor using the same thresholds.
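For readers unfamiliar with percentiles: the 75th percentile is the value that 75% of observed samples fall at or below. A simple nearest-rank version (one convention among several, and not necessarily the exact method PSI uses) looks like this:

```python
import math

def percentile_75(values):
    """Nearest-rank 75th percentile: the value at or below which
    75% of the samples fall."""
    ordered = sorted(values)
    rank = max(0, math.ceil(0.75 * len(ordered)) - 1)  # zero-based index
    return ordered[rank]

# 75% of these FCP samples (ms) are at or below 1300ms.
print(percentile_75([900, 1100, 1300, 4000]))  # 1300
```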
These metrics are a set of common signals, essential to every web experience: CLS, LCP and FID. They can be aggregated at either the origin or the page level. When an aggregation has enough data for all three metrics, it passes the Core Web Vitals assessment if the 75th percentile of each metric is classified as good.
When the aggregation has insufficient data for FID, the assessment is passed when the 75th percentiles of both CLS and LCP are classified as good. When either CLS or LCP lacks data, the origin- or page-level aggregation cannot be assessed.
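The two rules above can be summed up in a short sketch. The function and its inputs are our own framing: it takes each metric's 75th-percentile classification, with `None` standing in for "insufficient data".

```python
def passes_core_web_vitals(p75_categories: dict) -> str:
    """Apply the Core Web Vitals assessment rules described above.

    p75_categories maps "LCP" / "CLS" / "FID" to the 75th-percentile
    classification ("good" / "needs improvement" / "poor"), or None
    when the aggregation lacks sufficient data for that metric.
    Illustrative sketch, not Google's implementation.
    """
    lcp, cls, fid = (p75_categories.get(m) for m in ("LCP", "CLS", "FID"))
    if lcp is None or cls is None:
        return "cannot be assessed"      # LCP or CLS data is required
    required = [lcp, cls] if fid is None else [lcp, cls, fid]
    return "passes" if all(c == "good" for c in required) else "fails"

print(passes_core_web_vitals({"LCP": "good", "CLS": "good", "FID": None}))  # passes
```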
There is a difference between the field data in the Chrome User Experience Report and that in PSI: PSI's data is updated daily over a trailing 28-day period, whereas the CrUX dataset on BigQuery is updated only monthly.
Lighthouse is used by PSI to analyse the given URL and generate a performance score estimating how the page performs on various metrics, including FCP, LCP, CLS, TTI, TBT and Speed Index. A green check indicates good, an orange circle indicates needs improvement and a red triangle indicates poor.
Lighthouse separates its audits into three sections:
The Opportunities section offers suggestions on how the page's performance metrics can be improved, with each one estimating how much faster the page could load if that improvement were implemented.
The Diagnostics section provides extra information about the way in which the page follows web development best practices.
The Passed Audits section indicates which audits the page has passed.
Field data is a historical report of how a particular URL has performed, based on anonymised performance data from real-world users on a range of devices under varying network conditions. Lab data, by contrast, comes from a simulated page load on a single device under fixed network conditions, which is why the two sets of values often differ.
The Chrome User Experience Report aggregates speed data from real-world, opted-in users. This requires the URL to be public (i.e. indexable and crawlable) and to have enough distinct samples to provide an anonymised, representative view of its performance. The same holds true for speed data about an origin.
Again, the root page of the origin must be indexable and crawlable for real-world speed data to be aggregated, and enough distinct samples must be available to provide an anonymised, representative view of the origin's performance across all visited URLs. Therefore, if a URL or origin has no CrUX speed data available, this indicates a lack of suitable samples.