What You Need To Know About The Search Console Core Web Vitals Report
- 28 May, 2020
- Jason Ferry
- SEO experts
Google is now providing a report on websites’ “Core Web Vitals” in Search Console. First introduced earlier this May, the Core Web Vitals are a set of metrics that measure the quality of a site’s user experience. Google says these metrics are essential for every web experience, allowing website owners and SEO experts to easily monitor their site’s performance against clear criteria.
Core Web Vitals Measurement In Google Search Console
The Core Web Vitals report has begun rolling out in Search Console, replacing the previous Speed report.
The Core Web Vitals report demonstrates how Google’s thinking when it comes to user experience has evolved.
It reflects the important message that simply having a website that loads fast is not enough – there are other things you should do to keep users satisfied.
According to Google, a website must also meet certain criteria for visual stability, loading and interactivity for it to offer an excellent user experience.
The Core Web Vitals
Here are the metrics that make up the Core Web Vitals (a measurement sketch follows this list):
- Largest Contentful Paint. This metric measures loading performance by marking the point in the page load timeline when the largest content element has rendered. Ideally, LCP should occur within 2.5 seconds of the page starting to load.
- First Input Delay. This measures interactivity: the time between a user’s first interaction with a page and the moment the browser can begin responding to it. Pages should have an FID of less than 100 milliseconds.
- Cumulative Layout Shift. This measures visual stability by quantifying how much visible page content shifts unexpectedly while the page loads. Pages should maintain a CLS score of less than 0.1.
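If you want to spot-check these numbers on a live page before they appear in Search Console, Chromium-based browsers expose the underlying measurements through the PerformanceObserver API. The snippet below is a minimal sketch, assuming support for the ‘largest-contentful-paint’, ‘first-input’ and ‘layout-shift’ entry types; it simplifies edge cases that Google’s open-source web-vitals library handles for you:

```typescript
// Minimal field measurement of the three Core Web Vitals via
// PerformanceObserver (Chromium-based browsers only; a sketch,
// not a replacement for the web-vitals library).

// Largest Contentful Paint: the most recent LCP entry wins.
new PerformanceObserver((list) => {
  const entries = list.getEntries();
  const last = entries[entries.length - 1];
  if (last) console.log('LCP (ms):', last.startTime); // aim for <= 2500
}).observe({ type: 'largest-contentful-paint', buffered: true });

// First Input Delay: gap between the first interaction and the moment
// the browser could start running its event handlers.
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as PerformanceEventTiming[]) {
    console.log('FID (ms):', entry.processingStart - entry.startTime); // aim for < 100
  }
}).observe({ type: 'first-input', buffered: true });

// Cumulative Layout Shift: sum of unexpected shift scores; shifts that
// follow recent user input are excluded via hadRecentInput.
let cls = 0;
new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as any[]) {
    if (!entry.hadRecentInput) cls += entry.value;
  }
  console.log('CLS so far:', cls); // aim for < 0.1
}).observe({ type: 'layout-shift', buffered: true });
```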
What makes these metrics crucial?
Google says it chose these metrics as its Core Web Vitals because they capture critical user-centric outcomes, are measurable in the field, and have equivalent lab diagnostic metrics for debugging.
How to read the Core Web Vitals Report
In the Core Web Vitals report, you will see your URL performance grouped by metric type, status, and groups of similar web pages.
On the Overview tab, you can switch between the ‘Good’, ‘Needs Improvement’ and ‘Poor’ views. From there, click Open Report to view the page performance numbers for desktop and mobile.
To view details about the URL groups that are being impacted by a particular issue, click on the applicable individual rows in the table.
This works exactly the same way as navigating other reports in the Google Search Console.
Improving The Core Web Vitals
According to Google, it’s best to concentrate your efforts on fixing the issues labelled ‘Poor’ first, and then prioritise the remaining work by which issues affect the most URLs.
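To illustrate that triage order, here is a small sketch. The VitalsIssue shape is hypothetical (Search Console exports a spreadsheet rather than this structure), but the sort mirrors Google’s advice:

```typescript
// Hypothetical shape for a row exported from the Core Web Vitals report.
interface VitalsIssue {
  description: string;                   // e.g. "CLS issue: more than 0.25"
  status: 'Poor' | 'Needs Improvement';
  affectedUrls: number;                  // size of the affected URL group
}

// Google's suggested order: 'Poor' issues first, then whichever
// issue impacts the most URLs.
function prioritise(issues: VitalsIssue[]): VitalsIssue[] {
  return [...issues].sort((a, b) => {
    if (a.status !== b.status) return a.status === 'Poor' ? -1 : 1;
    return b.affectedUrls - a.affectedUrls;
  });
}
```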
Before tackling each issue, consider whether you’d be better off hiring a developer for the job, especially if you aren’t familiar with technical problems like these. To make the handover easy, you can send the downloaded report to the professional you hire.
According to Google, the most common ways to fix pages include the following (a quick way to audit the first two is sketched after this list):
- Maintaining a page size of less than 500KB
- Limiting the number of page resources to 50
- Using AMP where possible
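You can get a rough read on the first two points straight from the browser using the Resource Timing API. This is only a sketch: transferSize is reported as 0 for cached responses and for cross-origin resources without a Timing-Allow-Origin header, so treat the total as a lower bound:

```typescript
// Rough page-weight and resource-count audit via the Resource Timing API.
// Run in the browser console once the page has finished loading.
const resources =
  performance.getEntriesByType('resource') as PerformanceResourceTiming[];
const nav =
  performance.getEntriesByType('navigation')[0] as PerformanceNavigationTiming;

// transferSize can be 0 for cached or opaque cross-origin responses,
// so this total is a lower bound on the real page weight.
const totalBytes =
  (nav?.transferSize ?? 0) +
  resources.reduce((sum, r) => sum + r.transferSize, 0);

console.log(`Resources loaded: ${resources.length}`);                   // Google suggests <= 50
console.log(`Approx. page size: ${(totalBytes / 1024).toFixed(0)} KB`); // aim for < 500KB
```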
As with other Search Console reports, issues that have been fixed can be validated directly within the Core Web Vitals report.
What We Know About Google’s Crawling And Indexing Processes
Crawling and indexing are the first steps in ranking search results. While both processes are essential to how search engines operate, they are often misunderstood or overlooked.
Google Search Developer Advocate Martin Splitt used a simple librarian analogy to explain the crawling and indexing processes.
In Splitt’s example, Googlebot (Google’s web crawler) is the librarian and the webpage or site is the book. He explains, “Imagine a librarian: If you are writing a new book, the librarian has to actually take the book and figure out what the book is about and also what it relates to, if there’s other books that might be source material for this book or might be referenced from this book”.
Splitt also explained the indexing process, saying, “Then you . . . have to read through [the book], you have to understand what it is about, you have to understand how it relates to the other books, and then you can sort it into the catalog”. Here, Splitt is saying that your page’s content is stored in the “catalog”, where it can rank and serve as an answer for relevant queries.
Splitt further explained the two processes in a technical manner, “We have a list of URLs . . . and we take each of these URLs, we make a network request to them, then we look at the server response and then we also render it (we basically open it in a browser to run the JavaScript) and then we look at the content again, and then we put it in the index where it belongs, similar to what the librarian does”.
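To make that description concrete, here is a deliberately toy version of the crawl-and-index loop in TypeScript. It assumes Node 18+ for the global fetch, uses a crude regex in place of real HTML parsing, and skips the rendering step Splitt mentions, so it never runs a page’s JavaScript:

```typescript
// A toy version of the crawl-then-index loop: take a URL from the list,
// make a network request, look at the response, discover new links, and
// file the content into the "catalog" (the index). Real crawlers also
// render the page to execute its JavaScript; this sketch skips that.

const index = new Map<string, string>();          // URL -> page text ("the catalog")
const queue: string[] = ['https://example.com/']; // the list of URLs to crawl
const seen = new Set<string>(queue);

async function crawl(maxPages = 10): Promise<void> {
  while (queue.length > 0 && index.size < maxPages) {
    const url = queue.shift()!;
    const response = await fetch(url);  // the network request
    if (!response.ok) continue;         // look at the server response
    const html = await response.text();

    index.set(url, html);               // "sort it into the catalog"

    // Discover further URLs via absolute anchor hrefs.
    for (const match of html.matchAll(/href="(https?:\/\/[^"]+)"/g)) {
      if (!seen.has(match[1])) {
        seen.add(match[1]);
        queue.push(match[1]);
      }
    }
  }
}

crawl().then(() => console.log(`Indexed ${index.size} page(s)`));
```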
Why we care. Content must be crawled and indexed before a page can appear in the SERPs. Understanding the crawling and indexing processes helps you not only fix technical SEO issues, but also ensure your webpages can be accessed by search engines.
These news articles were originally posted on https://www.searchenginejournal.com/google-search-console-core-web-vitals/370591/ and https://searchengineland.com/how-google-crawls-and-indexes-a-non-technical-explanation-video-335245.
Get more traffic for your website and increase your sales at the same time by using professional SEO services. Visit Position1SEO to find out more about how our team can help you.