Fix “Discovered – currently not indexed” problem

“Discovered – currently not indexed” is a page status in Google Search Console. It means that Google knows about a particular page but hasn’t crawled it yet, so it isn’t currently indexed.

There are three main causes of “Discovered – currently not indexed” URLs: content quality issues, weak internal linking, and crawl budget limitations.

Each issue has different solutions. Let’s take a look at them.

What does the “Discovered – currently not indexed” status mean?

“Discovered – currently not indexed” means two things: first, Google has found your page; second, Google has not yet crawled or indexed it.

Google’s Search Console help page explains why:

Typically, Google wanted to crawl the URL but this was expected to overload the site; therefore Google rescheduled the crawl. This is why the last crawl date is empty on the report. Source: Index Coverage report, Google

This does not mean that your content will never be crawled and indexed. As the Google documentation states, Google may crawl your page again later without any action on your part.

However, Google’s crawl rescheduling is only one of several possible causes for this issue.

Let’s explore the possible causes of “Discovered – currently not indexed” and how to address them to improve your SEO.

7 Solutions for “Discovered – Currently Not Indexed” URLs

1. Fix content quality issues

Google can’t crawl and index everything on the web. Content must meet certain quality standards to be crawled and indexed. Google focuses on crawling the highest-quality pages and may skip low-quality pages entirely.

Therefore, if your content is not being crawled and indexed, you may need to address its quality.

This does not only apply to the pages reported as “Discovered – currently not indexed”; it can also relate to the quality of your site as a whole. Google’s John Mueller has stated that “Discovered – currently not indexed” may be caused by a site-wide content quality issue.

You can never know exactly how Google rates the quality of your website, but there are several things you can do to start addressing this problem:

  • Refer to the Quality Rater Guidelines.
  • Ensure that each affected page contains unique content.

Our article on the Quality Rater Guidelines summarizes them and will help you understand how Google defines web content quality. You can then apply Google’s idea of quality to your pages.

If you want to dive deeper into the topic, check out our article on E-A-T. This is a concept used in the Quality Rater Guidelines to assess the expertise, authoritativeness, and trustworthiness of a web page.

Make sure you have unique content

Google may ignore your URLs if it considers them duplicates. Because Google’s resources are limited, it focuses on crawling (and indexing) the most valuable URLs: those with unique content that targets a specific user intent.

Check affected URLs to make sure:

[…] that you do not accidentally create URLs with different URL patterns, […] Things like parameters in your URL, uppercase and lowercase letters, all of these things can lead to basically duplicate content. And if we detect a lot of these duplicate URLs, we might think that we don’t actually need to crawl all those duplicates because we already have some variations of this page. Source: John Mueller

To sum up what John Mueller said: double-check your website for duplicate content. If you find any, see our article on how to fix duplicate content. If you have many similar URLs, consider using canonical tags. These tags tell Google to index only the canonical version of your page.
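
As a minimal sketch (the domain and paths are placeholders), a canonical tag lives in the <head> of each duplicate or parameterized variant and points to the preferred URL:

    <!-- In the <head> of https://example.com/shoes?color=red and similar variants -->
    <!-- Tells Google that https://example.com/shoes is the version to index -->
    <link rel="canonical" href="https://example.com/shoes" />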

Remember that incorrectly implemented canonical tags can be ignored by Google. If Google ignores your canonical tag, you can detect it thanks to the “Duplicate, Google chose different canonical than user” status in Google Search Console.

Go through the list of affected URLs and make sure that each page contains unique content.

This will make your pages more likely to be crawled and indexed. It will also improve the overall quality of your website and increase user satisfaction.

2. Follow internal linking best practices

Googlebot follows internal links to discover and understand the different pages on your site. Internal links also pass PageRank, a signal of a page’s importance that is used in ranking.

If Google doesn’t find enough links pointing to a URL, it may skip crawling it due to insufficient signals of its importance. Google may assume that pages with weak internal linking are unimportant, and such pages may end up with the “Discovered – currently not indexed” status.

Proper internal linking means connecting your pages in a logical structure. This structure allows search engines and users to understand the hierarchy of your pages and how they are connected.

By using internal links correctly, you both help Googlebot find all of your content and improve its chances of ranking high. In the context of fixing “Discovered – currently not indexed”, linking internally to pages that haven’t been crawled and indexed improves the chances of them being picked up by Google.

Some best practices for internal linking include the following (a short markup sketch follows the list):

  • Define your primary content and link other pages to it
  • Apply contextual links within your content
  • Link pages based on hierarchy, for example linking primary pages to supplementary pages and vice versa
  • Don’t spam your website with links
  • Don’t over-optimize your anchor text
  • Include links to related products or posts
  • Add internal links to orphan pages
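
For example, a contextual internal link with descriptive anchor text (the URL and anchor text are placeholders) could look like this:

    <!-- A contextual link from a supporting article to a primary page -->
    <p>Before buying, read our <a href="/guides/running-shoes">guide to choosing running shoes</a>.</p>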

Want to know more? Check out our article on internal links.

You can also contact Go Start Business to improve your internal links.

3. Prevent Google from crawling and indexing low-quality pages

Allowing Google to crawl your entire website without restriction has two negative consequences.

First, Googlebot will visit every page it can reach until it exhausts your crawl budget. If Googlebot crawls low-quality pages, it may hit that limit before it reaches your canonical pages.

Second, if you allow Google to crawl and index low-quality pages, it may lower its assessment of the quality of your entire website. This can not only damage your rankings but also reduce crawl demand, creating a vicious circle of crawl budget issues.

Low quality pages include:

  • Old content
  • Pages generated by a search box within a website
  • Duplicate content
  • Pages created by applying filters
  • Auto generated content
  • User generated content

If you are already seeing unindexed content, you should prevent Google from crawling and indexing these low-quality pages.

Prevent low-quality pages from being crawled with your robots.txt file, and use the noindex meta tag to prevent indexing. Note that Googlebot must be able to crawl a page to see its noindex tag, so don’t block a page in robots.txt if you rely on noindex to keep it out of the index.
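
As a hedged sketch (the paths below are placeholders; adjust them to your own low-quality URL patterns), a robots.txt rule blocking internal search and filter pages, and a noindex meta tag, might look like this:

    # robots.txt: keep crawlers away from internal search results and filtered views
    User-agent: *
    Disallow: /search
    Disallow: /*?filter=

    <!-- noindex meta tag in the <head> of a page that can be crawled but should not be indexed -->
    <meta name="robots" content="noindex">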

Need to decide on your indexing strategy? See our article on how to create an indexing strategy for your website.

4. Create an optimized sitemap

An optimized sitemap can guide Googlebot through the crawling and indexing process. It is essentially a map that Google uses to navigate your content.

However, if your sitemap is not optimized properly, it can negatively impact your crawl budget and cause Googlebot to miss out on your important content.

Your sitemap should contain only:

  • URLs that respond with a 200 (OK) status code
  • URLs without meta robots tags that prevent them from being indexed
  • The canonical versions of your pages

Below is a minimal example of an XML sitemap index file (the domain, file names, and dates are placeholders):
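
    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <sitemap>
        <loc>https://example.com/sitemap-posts.xml</loc>
        <lastmod>2023-01-15</lastmod>
      </sitemap>
      <sitemap>
        <loc>https://example.com/sitemap-pages.xml</loc>
        <lastmod>2023-01-10</lastmod>
      </sitemap>
    </sitemapindex>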

If you want to learn more about optimizing your sitemap, check out this ultimate guide to XML sitemaps.

5. Fix redirects

You need to avoid redirect chains and loops.

Redirect chains occur when traffic from page A has to pass through one or more intermediate redirects before reaching page B, for example, page A redirects to page C, which redirects to page B, instead of page A redirecting straight to page B.

Redirect loops occur when a redirect chain starts and ends on the same page, trapping users and bots in an endless cycle.

Both redirect chains and loops force Google to send multiple unnecessary requests to your server, wasting your crawl budget.

To avoid spending your crawl budget on unnecessary redirects, do not link to redirected pages. Instead, update your links so that they point directly to live pages that return a 200 status code.
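
As a sketch (assuming an nginx server; the paths are placeholders), fixing a chain means pointing every old URL straight at the final destination in a single hop:

    # Before the fix, /old-page redirected to /interim-page, which redirected to /new-page.
    # Point both old URLs directly at the final destination instead.
    location = /old-page {
        return 301 /new-page;
    }
    location = /interim-page {
        return 301 /new-page;
    }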

Ensure that you adhere to best practices for implementing redirects.

6. Fix overloaded servers

Crawling issues may be caused by an overloaded server (one that responds more slowly than expected). If Googlebot cannot reach a page because the server is overloaded, it reduces its crawl rate, i.e., the number of crawl requests it makes. This may leave some of your content uncrawled.

Google will attempt to revisit your website in the future, but the entire indexing process will be delayed.

You should check with your hosting provider for any server problems on your site.

In the meantime, check the Crawl Stats report in Google Search Console. Open the report, select your domain, and click Average response time (ms). This shows how long your server takes to respond to crawl requests. You will likely notice a correlation between total crawl requests and average response time: as responses slow down, crawl requests tend to drop.
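
You can also spot-check response times yourself. A simple curl command (the URL is a placeholder) reports how long the server takes to start responding:

    # Prints the time to first byte in seconds; run it against several pages
    curl -o /dev/null -s -w "TTFB: %{time_starttransfer}s\n" https://example.com/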

To learn more about the web performance and crawl budget connection, read our article on web performance and crawl budget.

7. Fix resource-intensive websites

Resource-heavy websites are another cause of crawl issues.

If a page calls for many additional resources to be crawled and rendered (such as multiple CSS style sheets or JavaScript files), it has a particularly negative impact on your crawl budget.

That’s because every resource Googlebot uses to display your page counts toward your crawl budget.

You should optimize your site’s JavaScript and CSS files (the top offenders). Bundling and minifying these files reduces the number of requests Googlebot has to make and the amount of code it has to process.
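
As a minimal sketch (the file names are placeholders), serving one bundled, minified stylesheet and deferring a bundled script cuts down the requests Googlebot must make to render the page:

    <!-- One bundled, minified stylesheet instead of several separate files -->
    <link rel="stylesheet" href="/assets/styles.min.css">
    <!-- One bundled script, loaded with defer so it doesn't block rendering -->
    <script src="/assets/app.min.js" defer></script>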

When you don’t need to fix “Discovered – currently not indexed” pages

In some cases, URLs with the “Discovered – currently not indexed” status do not need any action. You do not need to do anything if:

  • The number of affected URLs is low, and they are crawled and indexed over time.
  • The report contains URLs that should not be crawled or indexed, for example, those containing canonical or “noindex” tags, or those blocked by robots.txt.

It is important to check whether your URLs should be crawled in the first place. It is normal for some pages to be reported as “Discovered – currently not indexed”. But if:

  • The number of affected URLs keeps growing
  • Your canonical URLs are reported as “Discovered – currently not indexed”

then you need to investigate and improve the affected URLs, as this issue can lead to a significant drop in rankings and traffic.

URL Inspection Tool

Once you have updated your content and URLs, you can request indexing of specific pages with Google’s URL Inspection tool.

Open the URL Inspection tool in Google Search Console and paste the URL you want indexed into the search bar at the top of the page.

Then click on the “Request Indexing” button.

Requesting indexing with the URL Inspection tool does not guarantee that a particular page will be crawled and indexed. It just signals to Google that you want this page crawled and indexed with high priority.

Wrapping up

“Discovered – currently not indexed” is caused by site quality, internal linking, and crawl budget issues.

Here are the key points that will help get your pages crawled and indexed:

  • Check the quality and uniqueness of your affected pages
  • Build internal links, especially to vital pages
  • Use the robots.txt file to prevent Googlebot from crawling low-quality pages
  • Develop an indexing strategy that focuses on the most important pages
  • Optimize your crawl budget, so Google has more resources to crawl these pages.

Do you also see pages with the “Crawled – currently not indexed” status? Learn how to get those URLs indexed in our guide to “Crawled – currently not indexed”.
