Basics of JavaScript SEO: A Guide by Google

JavaScript is an important part of the web platform, providing many of the features that turn the web into a powerful application platform. If your JavaScript-powered web application is discoverable in Google Search, it can attract new users and bring existing users back to the content you provide.

JavaScript SEO is a branch of SEO that focuses on websites built with JavaScript, helping them become visible in Google Search.

Guide to JavaScript SEO Basics

Google has released a guide that covers the basics of JavaScript SEO.

The Guide 

The guide begins by describing the three-step process Google uses to handle JavaScript:

  • Crawling.
  • Rendering. 
  • Indexing.

After that, the guide gives some simple tips for creating Google-friendly content with JavaScript. The tips are:

1. Make the Titles and Snippets Unique

Unique, relevant titles and useful meta descriptions make it easier for users to identify the best result for their search.

Now let’s see how you can create an effective and relevant title.

Titles are an important part of the content. A title gives users insight into what a page contains, and it is often the main piece of information on which they decide whether to click.

So, here are a few tips to create a good and effective title:

  • First, you have to ensure that the title on your page is in the <title> tag.
  • Titles should always be precise and to the point. Avoid vague descriptors like “Home” for your home page or “Profile” for a specific profile. Also avoid lengthy, verbose titles, as they are likely to get truncated when they show up in search results.
  • Keyword stuffing is another thing to avoid at all costs. It can be helpful to include a few related terms in the title, but repeating the same word or phrase over and over is counterproductive.

A title like “foo bars, foo bar, Foobar, foobars” does not help users at all, and it may lead Google to treat your result as spammy.

  • Brand your title concisely. The title is an appropriate place to convey additional information about your website, for example “ExampleSocialSite, a place for people to meet and mingle”. However, using the same text on every page of your site hurts readability.

In that case, consider placing your site name only at the beginning or end of each page title.
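In a JavaScript-rendered application, each view can set its own title when it renders. Below is a minimal sketch; the setPageTitle helper and the page name are hypothetical examples, not part of Google’s guide:

// Set a concise, descriptive title with the site name at the end
function setPageTitle(pageName) {
  document.title = pageName + ' - ExampleSocialSite';
}

// Example: call this whenever a client-side route changes
setPageTitle('Community Events in Chicago');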

2. Always Write Compatible Code

You have to follow Google’s guidelines to make sure that your code is compatible with Googlebot. This will also help you avoid common JavaScript problems.

To avoid JavaScript issues, follow these steps:

  • To check how Google crawls and renders a URL, you can use the Mobile-Friendly Test or the URL Inspection tool. These tools let you see the loaded resources, the rendered DOM, the JavaScript console output, and other information by clicking on the more information link.
  • Keep in mind that Googlebot declines user permission requests, because it cannot grant them. If your content requires a permission such as camera access, provide a way for users to view the content without having to grant that access.
  • Make sure your web components are search-friendly.
  • To hide implementation details, you can use shadow DOM.
  • Try putting your content in light DOM, so that it stays visible to search (see the sketch below).
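Here is a minimal sketch of the light DOM approach. The recipe-card element and its markup are hypothetical examples, not from Google’s guide; the point is that the visible text lives in light DOM and is projected through a slot:

class RecipeCard extends HTMLElement {
  connectedCallback() {
    if (this.shadowRoot) return; // attachShadow may only be called once
    const shadow = this.attachShadow({ mode: 'open' });
    // The <slot> projects this element's light DOM children, which stay
    // visible to search engines in the rendered, flattened DOM
    shadow.innerHTML = '<div class="card"><slot></slot></div>';
  }
}

customElements.define('recipe-card', RecipeCard);

A page would then use <recipe-card> with its content written as ordinary light DOM children, for example <recipe-card><h2>Pancakes</h2></recipe-card>.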

3. Give your Pages Meaningful HTTP Status Codes

You can use a status code to let Googlebot know what you want to do with a page. With status codes you can indicate, for example, that a page should not be crawled, or that it has moved to a different URL.

Given below is a list of HTTP status codes and when to use them:

  • 301/302: for pages that have moved to a new URL
  • 401/403: for pages that are unavailable due to permission issues
  • 404/410: for pages that are no longer available
  • 5xx: when something goes wrong on the server
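As a rough illustration, here is a minimal Node.js sketch of a server sending meaningful status codes; the routes are hypothetical:

const http = require('http');

http.createServer(function (req, res) {
  if (req.url === '/old-page') {
    // The page has moved: send a 301 with the new location
    res.writeHead(301, { Location: '/new-page' });
    res.end();
  } else if (req.url === '/members-only') {
    // The page is unavailable for permission reasons
    res.writeHead(403, { 'Content-Type': 'text/plain' });
    res.end('Forbidden');
  } else if (req.url === '/new-page') {
    res.writeHead(200, { 'Content-Type': 'text/html' });
    res.end('<title>New page</title><p>Hello</p>');
  } else {
    // The page no longer exists
    res.writeHead(404, { 'Content-Type': 'text/plain' });
    res.end('Not found');
  }
}).listen(3000);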

4. Use Meta Robots Tags Properly

If you want to prevent a page from being indexed by Googlebot, you can do so using the robots meta tag.

For example, if you do not want Googlebot to index your page, you can add the following tag at the top of your page:

<!-- Googlebot won't index this page or follow links on this page -->
<meta name="robots" content="noindex, nofollow">

You can also change the robots meta tag on a page using JavaScript, which will prevent Googlebot from indexing the page. The following code shows how to do this when an API call reports an error:

fetch('/api/products/' + productId)
  .then(function (response) { return response.json(); })
  .then(function (apiResponse) {
    if (apiResponse.isError) {
      // get the robots meta tag
      var metaRobots = document.querySelector('meta[name="robots"]');
      // if there was no robots meta tag, add one
      if (!metaRobots) {
        metaRobots = document.createElement('meta');
        metaRobots.setAttribute('name', 'robots');
        document.head.appendChild(metaRobots);
      }
      // tell Googlebot to exclude this page from the index
      metaRobots.setAttribute('content', 'noindex');
      // display an error message to the user (errorMsg is an element
      // assumed to exist elsewhere on the page)
      errorMsg.textContent = 'This product is no longer available';
      return;
    }
    // display product information
    // ...
  });

When Googlebot encounters noindex in the robots meta tag before executing JavaScript, it will not render or index your page.

5. Fix Images and Lazy-Loaded Content

Images on a webpage can be quite costly in terms of data transfer and website performance.

Lazy-loading is a good strategy when using images on your webpage. It defers loading an image until the user is about to see it.

To implement lazy-loading successfully, follow the process below.

  • Load the content when it becomes visible in the viewport

For Googlebot to see all the content on your page, you have to ensure that your lazy-loading implementation loads all relevant content whenever it becomes visible in the viewport.

Here are some ways to do it (a sketch of the first approach follows this list):

  1. Use the IntersectionObserver API and a polyfill.
  2. Use a JavaScript library that supports loading data when it enters the viewport.
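Below is a minimal sketch of the IntersectionObserver approach, assuming images are marked up with a hypothetical data-src attribute that holds the real image URL:

document.addEventListener('DOMContentLoaded', function () {
  var lazyImages = document.querySelectorAll('img.lazy');
  if ('IntersectionObserver' in window) {
    var observer = new IntersectionObserver(function (entries, obs) {
      entries.forEach(function (entry) {
        if (entry.isIntersecting) {
          var img = entry.target;
          img.src = img.dataset.src;    // swap in the real image URL
          img.classList.remove('lazy');
          obs.unobserve(img);           // stop watching once loaded
        }
      });
    });
    lazyImages.forEach(function (img) { observer.observe(img); });
  } else {
    // Fallback when IntersectionObserver is unavailable (or load a polyfill)
    lazyImages.forEach(function (img) { img.src = img.dataset.src; });
  }
});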
  • Use paginated loading to support infinite scroll

If you want to implement infinite scroll on your page, you need to support paginated loading.

Paginated loading is very important. It allows users to share and re-engage with your content, and it enables Google to show a link to a specific point in the content rather than to the top of an infinitely scrolling page.

To support paginated loading, provide a unique link to each section that users can share and load directly, and use the History API to update the URL as content loads.
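Here is a minimal sketch of the idea; the /api/items endpoint and the #items container are hypothetical examples:

// Hypothetical helper: fetch the next batch of items and append it
function loadPage(pageNum) {
  return fetch('/api/items?page=' + pageNum)
    .then(function (response) { return response.text(); })
    .then(function (html) {
      document.querySelector('#items').insertAdjacentHTML('beforeend', html);
    });
}

var currentPage = 1;

function loadNextPage() {
  currentPage += 1;
  loadPage(currentPage).then(function () {
    // Give this scroll position a shareable, crawlable URL
    history.replaceState(null, '', '?page=' + currentPage);
  });
}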

  • Test

After you have implemented everything, you need to make sure it works properly. One way to do this is to use a Puppeteer script to quickly check your implementation.

To run it, you will need Node.js. Follow the given steps to fetch the script and run it:

git clone https://github.com/GoogleChromeLabs/puppeteer-examples
cd puppeteer-examples
npm i
node lazyimages_without_scroll_events.js -h

After you have run the script, review the screenshots it takes to check that they contain all the content you expect to be visible and indexed by Googlebot.
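If you want a standalone check instead, here is a minimal Puppeteer sketch (the URL is a placeholder) that loads a page without scrolling and captures a full-page screenshot for review:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Load the page without scrolling, the way a crawler first sees it
  await page.goto('https://example.com', { waitUntil: 'networkidle2' });
  // The full-page screenshot shows whether lazy-loaded content rendered
  await page.screenshot({ path: 'lazyload-check.png', fullPage: true });
  await browser.close();
})();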

Conclusion

So, these are the basics of JavaScript SEO. Applying them will improve your JavaScript-powered website’s presence in Google Search.

Hence, this guide should help you optimize your website considerably.