Imagine that the internet is like a book and that each web page is a chapter of that book. Now, imagine what it would be like to search for a chapter in a book with millions and millions of pages. Without an index, it would be madness, right?
When you do a Google search, for example, you are actually searching the index of that book (the internet), which in this case is the Google index. The index, in turn, shows you a list of the most relevant websites where you can find the information you are looking for.
Summary
- Indexing is the process by which web search engines add information to their search indexes.
- In addition, it is the process that allows the information you publish on the Internet to be used by the people who need it.
- You can employ techniques to facilitate the indexing process and, in turn, achieve better rankings for the content of your website.
What is indexing?
Indexing is the process through which information is added to a search index. Yes, to an index! That is, it is the way data is added to that ordered list called an “index”, which is very helpful when searching for information within a collection of content.
How does the indexing process work?
According to a document published by Google in 2010, we can think of web indexing as a complex system that spans everything from the collection of information to the personalized search index that is shown to users.
These are the most important steps in the process:
Crawling
First, search engines must find the information on your website, and this is done through crawling programs, or crawler bots (1). These bots visit websites as if they were users and move between the pages of the site, following the links between them.
Indexing
Once the information has been crawled, the bot processes it to better understand its content (1). To do this, it evaluates key aspects of your website and then adds the pages to the search engine index, where they are ranked according to the internal algorithms of the indexing program.
Publishing
The last phase of the process is the publishing of custom indexes, which are generated in response to a user’s query in the search engine (1). This is also the stage where the search engine decides whether the content of your website is valuable to the user or not.
Why is it important to index your web content?
Indexing is what makes the content of your website visible, through search engines, to users browsing the Internet. Beyond visibility, it also helps you generate more visits to your website and increase conversions and sales.
How to index the content of my website?
Normally, search engines index URLs automatically, without you having to do anything extra to make it happen. However, it is usually a somewhat lengthy process, and if your website is new it may take longer than usual.
That said, there are some methods you can use to speed up the crawling of a website.
| Method | Description | Specifications |
|---|---|---|
| Direct crawl request | Search engines offer tools you can use to send a direct request to crawl your URLs. Google: Google Search Console; Bing: Bing Webmaster Tools; Yahoo: uses the same index as Bing. | 1. Register and log in. 2. Select “URL inspection” (Google) or “Set up my site” (Bing). 3. Enter your URL in the “Submit URL” option. 4. Select “Submit”. |
| Through a sitemap | If you need to index many URLs, it is advisable to create and submit a sitemap of your website to the search engines. | You can create it manually or use a tool to do it (SE Ranking, Screaming Frog, Wonder WebWare, Sitemap Generator, SiteMap Creator by Inspyder, among others). |
| Through CMS update services | WordPress, Hub (HubSpot), Wix, and Blogger are content management systems that allow you to submit crawl requests to search engines. | There are two options: 1. manual update, 2. automatic update. Note: for WordPress there are different plugins available to do this (Yoast SEO, Google XML Sitemaps). |
What factors affect the indexing of a URL?
It is important to mention that crawler bots are not able to process every type of information. As a result, some websites may fail to get all of their content indexed if they do not take the necessary measures.
Google, for example, mainly processes textual content. It cannot analyze videos, images, or photos, nor the text that appears inside them, so you should always include enough text in your posts.
Some aspects that affect the indexation are:
- Pages blocked with robots.txt files.
- Websites that cannot be accessed anonymously (i.e., that require a login).
- Duplicate content.
- Pages containing rich media files.
- Multiple versions of the same site (international versions, etc.).
- Content that requires plug-ins such as Java or Silverlight.
- Content rendered in Canvas.
How do I know if my web content has been indexed?
There are several ways to know if your website has been indexed. The easiest and quickest is to type the name of your website into the search engine; if it has been indexed, it should appear among the first search results.
This applies to all search engines, although the first one you should check is Google, since it processes 85.55% of all internet searches.
Google is the search engine with the most advanced indexing process in existence, and this is what sustains its popularity: users know that Google shows them what they need, and that feeling of reliability is priceless.
Manually checking the indexing of a URL
To check the indexing manually you can use the “site” command in your browser: site:your-domain.com. The result will be a list of all the URLs that Google has indexed for your domain.
Another way to do it is through the command cache:your-domain.com, which shows you the date and time at which Google last cached the page.
You can also use the “site” command to check how many pages Bing has indexed from your website.
Checking indexing through Google Search Console.
Google Search Console is a tool that provides a “coverage report” that shows a history of the pages on your website that have been indexed and their indexing status.
It also offers a quick check, which you can access from the “Google Index” option by selecting “Indexing Status”. With this option, you can see how many pages have been indexed correctly and how many have not.
How to improve the indexing of your website?
Remember that it is not only about being registered in the index. You can have unbeatable content, but if your pages are not well indexed, they will not appear in the SERP shown to a user after a query.
These are some measures that will help you improve the indexing process of your website.
Keep the content of your website up to date.
Keep a steady rhythm of publishing and updating content on your website. You can publish daily or every two days; what you must take into account is that this rhythm conditions the crawler to index the content of your website with the same frequency with which you publish it.
Focus on the internal organization of your website.
When we use the term “internal organization” we refer to the way in which all the content of your website is grouped, ordered, and related. This is an aspect that is related to the experience that the user has when browsing your website.
As we mentioned in the crawling phase, bots visit web pages as if they were users, because user experience is a very important factor for search engines.
| What it should include | Benefits for users | Benefits for SEO |
|---|---|---|
| Well-defined keywords and target topics | Helps users understand what your site is about (3). | Helps search engines understand the content you want to rank for (3). |
| Clear and well-defined categories | Users can find what they are looking for more easily (3). | Grouping content by topic suggests that your site covers its subject in depth (3). |
| URLs with a good structure, i.e. showing a hierarchy | Users can navigate your website directly from the address bar (3,4). | Makes it easier to crawl and index new or updated content on your website (3,4). |
| Clear navigation menus | Users can move more easily between the pages of your website (3). | Gives relevance to the most important pages of your website (3). |
| Little depth between the pages of your site | Users can access any content in fewer than three clicks from the homepage (3). | Internal links keep your pages from getting lost and prevent orphan pages (3). |
| An HTML sitemap | It is visible to users, so they can directly access all the information on your website (3,4). | Helps search engines find new pages more quickly. |
| Good internal link management | Internal links help users navigate through the pages of your website more easily (3,4). | Good use of internal links favours the crawling, indexing, and ranking of your web content (3,4). |
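The “depth” and internal-link criteria above can be checked programmatically. Here is a minimal sketch in Python, with a hypothetical internal link graph, that uses a breadth-first search to compute how many clicks each page sits from the homepage; any page missing from the result is an orphan page that no internal link reaches.

```python
from collections import deque

def click_depth(links, home="/"):
    """Compute the minimum number of clicks from the homepage to each page.

    `links` maps each page URL to the list of pages it links to.
    Pages absent from the returned dict are orphan pages.
    """
    depth = {home: 0}
    queue = deque([home])
    while queue:
        page = queue.popleft()
        for target in links.get(page, []):
            if target not in depth:
                depth[target] = depth[page] + 1
                queue.append(target)
    return depth

# Hypothetical internal link graph of a small site.
site = {
    "/": ["/blog", "/products"],
    "/blog": ["/blog/post-1"],
    "/products": ["/products/widget"],
    "/blog/post-1": [],
    "/products/widget": [],
}

depths = click_depth(site)
too_deep = [url for url, d in depths.items() if d > 3]
print(depths)    # every page in this graph is within 2 clicks of the homepage
print(too_deep)  # pages deeper than three clicks
```

Running this against a real site would require crawling your own pages first; the graph here is only an illustration of the check.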
Create a sitemap.
The sitemap is a document that lists the URLs you want search engines to crawl and index. It is especially useful if your web pages contain videos, images, news, etc.
As already mentioned, search engines cannot process this type of information on their own; it is sitemaps (4) that allow them to crawl and index it more efficiently.
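A sitemap is just an XML file following the sitemaps.org protocol. As a sketch, the snippet below builds a minimal sitemap with Python's standard library; the domain and dates are hypothetical, and real sitemaps can also carry optional tags such as `<changefreq>` and `<priority>`.

```python
import xml.etree.ElementTree as ET

def build_sitemap(urls):
    """Build a minimal XML sitemap string from (loc, lastmod) pairs."""
    urlset = ET.Element(
        "urlset", xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
    )
    for loc, lastmod in urls:
        entry = ET.SubElement(urlset, "url")
        ET.SubElement(entry, "loc").text = loc        # the page's full URL
        ET.SubElement(entry, "lastmod").text = lastmod  # last modification date
    return ET.tostring(urlset, encoding="unicode")

# Hypothetical pages to submit for crawling.
pages = [
    ("https://your-domain.com/", "2024-01-15"),
    ("https://your-domain.com/blog/post-1", "2024-01-10"),
]
sitemap_xml = build_sitemap(pages)
print(sitemap_xml)
```

You would save this output as `sitemap.xml` at the root of your site and submit its URL through Google Search Console or Bing Webmaster Tools.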
How do I know if I need a sitemap?
Here is a summary that can help you decide whether or not you need a sitemap:
| You need it | You don’t need it |
|---|---|
| When your site has more than 500 pages | If your site has fewer than 500 pages |
| If you do not make good use of internal links | When your site is fully connected with internal links |
| If your site is new and has no external links redirecting to it | When your site already has external links pointing to it |
| If your site has a lot of rich media content (videos, photos, images) or appears in Google News | If you don’t publish much multimedia content |
Index only content that is of value to your audience.
Pages of value are those that respond to the search intent of your visitors. All pages that answer a question or fill a need are considered valuable and should therefore be indexed. Here are some examples of content that is of little value for indexing:
- Contact pages,
- Website internal search results,
- Privacy policies,
- Cookie policies,
- Legal pages,
- Shopping carts,
- Test pages, etc.
These are pages that may be necessary for your website but are not actually content you want to rank for. Therefore, to prevent the crawler bot from processing these pages, you should configure the robots.txt file or the appropriate meta tags.
Use the robots.txt file correctly.
The first thing the crawler checks when it arrives at your website is the robots.txt file, to verify which URLs it should not process (4). That is why, if you have pages on your site that you do not want indexed, you should edit this file.
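You can test a robots.txt file before deploying it using Python's standard `urllib.robotparser` module. The rules and domain below are hypothetical, blocking the kinds of low-value pages listed earlier:

```python
import urllib.robotparser

# A hypothetical robots.txt that blocks low-value pages from crawling.
robots_txt = """\
User-agent: *
Disallow: /cart/
Disallow: /search
Disallow: /test/
Allow: /
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(robots_txt.splitlines())

# Check which URLs the rules allow a generic crawler to fetch.
for path in ["/blog/indexing-guide", "/cart/checkout", "/search?q=seo"]:
    allowed = parser.can_fetch("*", "https://your-domain.com" + path)
    print(path, "->", "crawlable" if allowed else "blocked")
```

This kind of dry run helps catch an overly broad `Disallow` rule before it accidentally blocks content you do want indexed.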
Meta tags.
These are tags used to tell crawler robots not to include a page in the search index. That is, unlike the robots.txt file, the bots can still process the information, but they should not display it in search results.
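The tag in question is a `<meta name="robots" content="noindex">` element in the page's `<head>`. As a sketch, the snippet below uses Python's standard `html.parser` to check whether a page (the HTML here is a hypothetical internal search results page) carries that directive:

```python
from html.parser import HTMLParser

class NoindexDetector(HTMLParser):
    """Detect a <meta name="robots" content="noindex"> tag in an HTML page."""

    def __init__(self):
        super().__init__()
        self.noindex = False

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "meta" and a.get("name", "").lower() == "robots":
            if "noindex" in a.get("content", "").lower():
                self.noindex = True

# Hypothetical page you would not want in the index.
page = """<html><head>
<meta name="robots" content="noindex, follow">
<title>Internal search results</title>
</head><body>...</body></html>"""

detector = NoindexDetector()
detector.feed(page)
print("noindex" if detector.noindex else "indexable")
```

Note the `follow` value alongside `noindex`: the page stays out of the index, but the crawler may still follow the links it contains.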
Manage duplicate URLs.
Search engines describe duplicate content as a block of information that looks like or is identical to another one found on that same domain or on another domain. Most of the time it occurs unintentionally and happens without us even realizing it. Some cases in which duplicate content occurs are:
- Discussion forums that generate pages for different browsing devices.
- Elements of a store that are displayed across multiple URLs.
- Different print versions of the same web page, etc.
And even if it is not intentional, it can be detrimental to your website, so it is advisable that you manage the content you publish correctly.
Minimize errors and broken links.
Errors and broken links harm both crawling and the user’s experience on your website. Therefore, it is important that you detect and manage errors and links that lead nowhere.
You can use some of these tools to detect errors and broken links on your website:
- W3C Link Checker
- Screaming Frog
- Google Search Console (formerly Google Webmaster Tools)
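The core of what those tools do can be sketched in a few lines: extract every link from a page, then flag the ones that point nowhere. This simplified Python example checks internal links against a hypothetical list of existing pages instead of making real HTTP requests:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect every href from the anchor tags in an HTML document."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

# Hypothetical set of pages that actually exist on the site.
existing_pages = {"/", "/blog", "/contact"}

html = (
    '<a href="/blog">Blog</a> '
    '<a href="/old-page">Old</a> '
    '<a href="/contact">Contact</a>'
)
collector = LinkCollector()
collector.feed(html)

# Internal links whose target page does not exist are broken.
broken = [link for link in collector.links if link not in existing_pages]
print(broken)  # ['/old-page']
```

A real checker would instead issue a request per link and treat 4xx/5xx responses as broken, but the detection logic is the same.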
How to remove content from Google’s index?
Here is how to proceed in case you need to remove content from the search index. This action may be necessary in the following cases:
- Content that was indexed by mistake.
- Pages that you have deleted.
- URLs that change from index to noindex status.
- If you have indexed subdomains.
What is web deindexing?
This refers to removing content from the search engine index that you no longer want to be visible to users. In other words, it is to remove certain content from an index, so that it does not appear in the SERP of a search engine.
How to deindex a page from Google’s index?
There are several ways to deindex a URL from the search engine index. Here are a few options to give you an idea of how it is done.
- Add the meta tag “noindex” to the URL you want to deindex. This will tell the crawler bot not to index your content.
- You can use some tools, such as Google Search Console.
- Return a 404 status code for the URL. This tells the bot that the content of the page is not available, but might be available again in the future.
- You can also return a 410 status code for the URL. This tells the bot that the content is gone and will not be available again.
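The 404-versus-410 distinction above can be sketched as a tiny routing function. The URL sets here are hypothetical; in practice this decision lives in your server or CMS configuration:

```python
# Hypothetical sets of URLs managed by the site.
known_pages = {"/", "/blog", "/contact"}
deleted_forever = {"/old-campaign", "/discontinued-product"}

def status_for(path):
    """Return the HTTP status code the server should send for `path`.

    200 OK:        the page exists and can be crawled and indexed.
    410 Gone:      the content was removed on purpose and will not return,
                   so crawlers can drop it from the index faster.
    404 Not Found: the content is missing, but might come back.
    """
    if path in known_pages:
        return 200
    if path in deleted_forever:
        return 410
    return 404

for path in ["/blog", "/old-campaign", "/typo-url"]:
    print(path, "->", status_for(path))
```

Using 410 for pages you deleted deliberately gives crawlers a clearer signal than a generic 404.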
Conclusion
Although it may not seem like it, indexing is a key aspect that impacts the ranking of your website. It should not be underestimated; on the contrary, it deserves careful attention. Use our recommendations as a guide to improve the indexing process and thus facilitate the work of search engines.
Remember that if your content is not well-indexed, even if it is of very good quality, it will not appear in the results pages shown to users after a search engine query. And that translates into less visibility, fewer conversions, and fewer sales.