What We Know About The Crawling And Indexing Issues Associated With Using JavaScript
- 17 May, 2018
- Jason Ferry
- Crawling And Indexing Issues
At Google's recent annual developer conference, Google I/O, several issues around the use of JavaScript on websites were discussed. It was also revealed that Googlebot follows a two-phase approach when processing JavaScript-powered web pages. Webmasters and SEO experts should note that crawling and indexing issues can arise from this two-phase process when a site relies on JavaScript for key content.
In Google's two-phase indexing, an initial crawl and indexing pass happens first, based on the HTML that is immediately available. The JavaScript is only rendered in a second wave, which can follow a few days later, and in that second wave canonicals and metadata are no longer checked. If those elements are only added by JavaScript and therefore aren't present during the first phase, there is a real possibility that Googlebot will miss them completely, which affects your search engine optimisation efforts, particularly the indexing and ranking of your web pages.
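To illustrate the risk, here is a minimal, hypothetical sketch of client-side code that only injects a canonical link and meta description after the JavaScript runs; the function name, URL, and copy are placeholders rather than anything from the article. Because neither tag exists in the raw HTML, the first indexing wave has nothing to read, and the second wave won't revisit them.

```ts
// Hypothetical client-side code that adds SEO-critical tags only after
// JavaScript executes. These tags are absent from the initial server
// response, so the first wave of indexing never sees them.
function injectSeoTags(canonicalUrl: string, description: string): void {
  const canonical = document.createElement("link");
  canonical.rel = "canonical";
  canonical.href = canonicalUrl;
  document.head.appendChild(canonical);

  const meta = document.createElement("meta");
  meta.name = "description";
  meta.content = description;
  document.head.appendChild(meta);
}

injectSeoTags(
  "https://www.example.com/products/blue-widget", // placeholder URL
  "Our best-selling blue widget."                 // placeholder description
);
```

The safer pattern is to ship canonicals, metadata, and correct HTTP status codes in the initial HTML response rather than adding them in the browser.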
John Mueller of Google emphasised that these issues can become major ones if left unchecked. Remember that metadata, HTTP status codes, and canonicals play a significant role in how search crawlers understand the content on your website, so if they aren't rendered properly, your overall indexability is at risk. Mueller went on to suggest dynamic rendering: serving fully server-side rendered content to Googlebot while serving the JavaScript-heavy version to users.
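As a rough illustration of dynamic rendering, here is a minimal sketch using an Express server. The renderPageOnServer function, the public/index.html path, and the simple "googlebot" user-agent match are all assumptions for the sake of the example; a real setup would typically use a headless-browser prerendering service or a framework's server-side rendering support.

```ts
import express, { Request, Response } from "express";

// Hypothetical server-side renderer. In practice this might call a
// prerendering service or a framework's SSR entry point.
async function renderPageOnServer(url: string): Promise<string> {
  return `<!doctype html><html><head>
    <link rel="canonical" href="https://www.example.com${url}">
    <meta name="description" content="Server-rendered description.">
  </head><body><h1>Fully rendered content</h1></body></html>`;
}

// Very rough bot detection based on the User-Agent header (assumption:
// matching "googlebot" is enough for this sketch).
function isBot(req: Request): boolean {
  return /googlebot/i.test(req.headers["user-agent"] ?? "");
}

const app = express();

app.get("*", async (req: Request, res: Response) => {
  if (isBot(req)) {
    // Crawlers get fully rendered HTML with metadata and canonicals in place.
    res.send(await renderPageOnServer(req.path));
  } else {
    // Regular users get the normal JavaScript-heavy application shell.
    res.sendFile("index.html", { root: "public" });
  }
});

app.listen(3000);
```

The design point is simply that the crawler never has to execute JavaScript to see the elements that matter for indexing.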
This information was originally reported by The SEM Post: https://www.thesempost.com/google-javascript-googlebots-indexing-can-miss-metadata-canonicals-status-codes/.
If you want the key elements of your site checked, have a look at our SEO company page and get a free SEO audit with every new enquiry.