#indexing #google-search-console #technical-seo #checklist
Google Search Console showing the 'Discovered — currently not indexed' status

Discovered — currently not indexed: What It Means (and What to Do Next)

Google found your URL but won’t index it yet. Here’s what ‘Discovered — currently not indexed’ really means in GSC, the most common root causes, and a practical fix checklist.

If Google Search Console (GSC) shows “Discovered — currently not indexed”, here’s the core answer:

To fix it, don’t spam “Request indexing”. Instead, follow this 5-step checklist: confirm canonical → strengthen internal links → clean low-value URLs/sitemap → improve page value → wait & re-check.

This page explains what the status means, how to avoid the common traps, and gives you a practical, executable workflow.

What “Discovered — currently not indexed” actually means

Google is effectively saying:

  1. “We found this URL via a sitemap or link.”
  2. “We’re not spending crawl/index resources on it yet.”

That “yet” is important: sometimes this resolves on its own. Often it doesn’t—until you change the underlying signals.

These three are easy to mix up:

  • Discovered — currently not indexed → Google knows the URL exists, but hasn’t prioritized crawling.
  • Crawled — currently not indexed → Google already crawled it, but decided not to index (often quality/duplication).
  • Duplicate, Google chose different canonical than user → fix the canonical/duplication problem first.

If you’re unsure which you have, start with URL Inspection in GSC and compare declared vs selected canonical.
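If you want to script that comparison, the Search Console URL Inspection API returns both canonicals. Below is a minimal sketch of the parsing step only; it assumes the response shape documented for `urlInspection.index.inspect` (fields like `indexStatusResult.userCanonical`), and the API call and credentials setup are left out. The sample response is illustrative, not real data.

```python
def canonical_mismatch(inspection_result: dict) -> bool:
    """Return True when Google's selected canonical differs from the declared one.

    Assumes the URL Inspection API response shape; either field may be
    absent if Google hasn't crawled the URL yet.
    """
    idx = inspection_result.get("indexStatusResult", {})
    declared = idx.get("userCanonical")
    selected = idx.get("googleCanonical")
    return bool(declared and selected and declared != selected)

# Illustrative response fragment:
sample = {
    "indexStatusResult": {
        "coverageState": "Duplicate, Google chose different canonical than user",
        "userCanonical": "https://example.com/post/",
        "googleCanonical": "https://example.com/post",
    }
}
print(canonical_mismatch(sample))  # True: the trailing-slash variant was chosen
```

If this returns True for a URL, fix the canonical strategy before touching anything else in the checklist below.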

Signal → likely cause → what to do (and what NOT to do)

Use this table to avoid wasting days on the wrong fix:

| GSC / URL Inspection signal | Likely cause | Do this | Don’t do this |
| --- | --- | --- | --- |
| Discovered — currently not indexed | Crawl/index prioritization (weak internal links, too many low-value URLs) | Strengthen internal links, clean sitemap/low-value URLs, then wait | Spam “Request indexing” repeatedly |
| Crawled — currently not indexed | Google crawled but decided not to index (often thin/duplicate/soft-404) | Improve unique main content, consolidate duplicates, validate 200 + canonical | Keep making tiny edits daily (thrash) |
| Duplicate, Google chose different canonical | Canonicalization/duplication | Fix canonicals + redirects + internal links to the canonical | Treat it like a crawl budget issue |
| Indexed but 0 impressions | Indexed, but not associated with meaningful queries yet | Improve intent match, strengthen internal links/E‑E‑A‑T, then observe impressions for 7–14 days | Keep diagnosing it as an “indexing” problem |

If you want the long-form indexing diagnosis playbook, use this companion guide:

The 80/20: why it happens

In practice, this status usually comes from one (or a mix) of these buckets:

1) Weak discovery signals (internal linking is too light)

Google discovered the URL (maybe via sitemap), but it doesn’t see strong site-level reasons to crawl it soon.

Fixes

  • Add contextual internal links from relevant, crawlable pages (not just nav/footer)
  • Link from hub pages and from pages that already get crawled frequently
  • Avoid orphan pages

2) Too much low-value URL noise (crawl prioritization problem)

If your site generates lots of thin URLs (tag pages, search pages, parameter variants), Google may deprioritize new URLs.

Fixes

  • Reduce indexable junk: noindex thin tag/search pages
  • Consolidate parameter variants with canonicals
  • Ensure your sitemap is clean (only canonical, index-worthy URLs)
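The “clean sitemap” rule can be expressed as a simple filter. This is a sketch with an assumed in-memory record format (`url`, `status`, `is_canonical`, `index_worthy`); in practice you would populate these records from your CMS or a crawl export, not hard-code them.

```python
from dataclasses import dataclass

@dataclass
class UrlRecord:
    url: str            # absolute URL
    status: int         # last observed HTTP status
    is_canonical: bool  # URL is its own canonical target
    index_worthy: bool  # substantive page you actually want indexed

def sitemap_urls(records: list[UrlRecord]) -> list[str]:
    """Keep only canonical, 200, index-worthy URLs — nothing else belongs in the sitemap."""
    return [r.url for r in records
            if r.status == 200 and r.is_canonical and r.index_worthy]

records = [
    UrlRecord("https://example.com/guide", 200, True, True),
    UrlRecord("https://example.com/guide?ref=nav", 200, False, False),  # parameter variant
    UrlRecord("https://example.com/tag/misc", 200, True, False),        # thin tag page
    UrlRecord("https://example.com/old-post", 404, True, True),         # broken URL
]
print(sitemap_urls(records))  # ['https://example.com/guide']
```

Everything the filter drops here is exactly the “low-value URL noise” described above: it can exist on the site, but it shouldn’t compete for crawl priority in your sitemap.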

3) Early quality/value assessment (thin or duplicative content)

Google may avoid crawling/indexing pages that look unhelpful or very similar to other URLs.

Fixes

  • Add substantive main content (not placeholders)
  • Make the page’s purpose explicit in the first screen
  • Remove near-duplicates; consolidate where needed

4) Technical blockers (less common, but catastrophic when present)

Even if GSC says “discovered”, technical issues can still prevent progress.

Fixes

  • Confirm the URL returns 200 reliably
  • Check robots.txt isn’t blocking required resources
  • Check noindex (meta + X-Robots-Tag)
  • Check canonical consistency (www/non-www, http/https, trailing slash)
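The blocker checks above can be automated. The sketch below inspects a fetched response’s status code, `X-Robots-Tag` header, and robots meta tag; it uses a deliberately simple regex that assumes the `name` attribute comes before `content`, so a production check should use a real HTML parser instead.

```python
import re

def find_blockers(status: int, headers: dict, html: str) -> list[str]:
    """Flag the technical blockers from the checklist above (a sketch, not a crawler)."""
    blockers = []
    if status != 200:
        blockers.append(f"non-200 status: {status}")
    # Header-level noindex (X-Robots-Tag)
    if "noindex" in headers.get("X-Robots-Tag", "").lower():
        blockers.append("noindex via X-Robots-Tag header")
    # Meta robots noindex (naive: assumes name= precedes content=)
    meta = re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\']([^"\']*)["\']',
        html, re.IGNORECASE)
    if meta and "noindex" in meta.group(1).lower():
        blockers.append("noindex via robots meta tag")
    return blockers

page = '<html><head><meta name="robots" content="noindex,follow"></head></html>'
print(find_blockers(200, {}, page))  # ['noindex via robots meta tag']
```

An empty list doesn’t prove the page is indexable; it only rules out these specific blockers so you can move on to the prioritization and quality buckets.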

What to do (practical checklist)

Use this sequence to avoid thrashing:

  1. Confirm it’s the canonical URL

    • In URL Inspection check:
      • Google-selected canonical vs user-declared canonical
      • If they differ, fix canonical strategy before anything else.
    • Sanity check: internal links + sitemap should point to the canonical version.
  2. Make the page “worth crawling”

    • Make the purpose obvious in the first screen (what problem it solves).
    • Add unique main content (not placeholders or boilerplate).
  3. Strengthen internal links (discovery signal)

    • Add 3–10 contextual links from pages that are already indexed and crawled often.
    • Prefer links from:
      • hub pages / category pages
      • related posts with traffic/impressions
      • high-authority pages (homepage/features/docs)
  4. Reduce low-value URLs (prioritization signal)

    • “Clean sitemap” standard: include only canonical, 200, index-worthy URLs.
    • Noindex thin tag/search pages and avoid parameter spam in sitemaps.
  5. Wait, then validate

    • Give Google time to recrawl (hours→days; sometimes weeks).
    • Re-check URL Inspection; only then consider a single “Request indexing” for high-value URLs.
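Step 1’s sanity check — internal links and the sitemap all pointing at one canonical form — is easy to automate by normalizing every URL to your chosen convention before comparing. A minimal sketch, assuming the convention is https, non-www, no trailing slash (pick your own; the point is that links, sitemap, and canonical tags must agree on one form):

```python
from urllib.parse import urlsplit, urlunsplit

def to_canonical_form(url: str) -> str:
    """Normalize to one convention: https, non-www, no trailing slash (assumed here)."""
    parts = urlsplit(url)
    host = parts.netloc.lower().removeprefix("www.")
    path = parts.path.rstrip("/") or "/"  # keep "/" for the homepage
    return urlunsplit(("https", host, path, parts.query, parts.fragment))

print(to_canonical_form("http://www.example.com/post/"))  # https://example.com/post
```

Run every internal link target and sitemap entry through this and diff against the declared canonicals; any URL that changes under normalization is a mixed signal worth fixing before you wait on a recrawl.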

When to use “Request Indexing” (and when not to)

Use it when:

  • You fixed a real blocker (noindex/canonical/redirect chain)
  • The URL is important (money page/docs/high-intent post)

Don’t use it when:

  • You haven’t changed anything meaningful
  • The site is producing many low-value URLs (you’ll just churn)

How long does it take to move out of this state?

Realistically:

  • Hours → days for small sites with strong crawl signals
  • Days → weeks if the site is large, low-authority, or generating lots of low-value URLs

If you’re constantly editing the page every day, you can accidentally keep the URL in “permanent limbo”. Fix the root cause, then wait for a recrawl.

FAQ

What is the difference between URL inspection and live test in GSC?

  • URL Inspection (index status) tells you what’s currently in Google’s index + canonical selection.
  • Live Test tells you whether Google can fetch/render the page right now. A successful Live Test does not guarantee indexing.

Why does GSC say “URL is not on Google” even though my sitemap is submitted?

A sitemap helps discovery, but it doesn’t force indexing. If Google already shows “Discovered…”, you’re usually dealing with prioritization, canonical/duplication, or page value—not “Google can’t find it”.

What is the daily limit / quota for “Request indexing”?

Google doesn’t publish a simple fixed number that always applies. Treat it like a scarce resource:

  • only request indexing after you fix the root cause
  • prioritize money pages/docs/high-intent posts

Is this a penalty?

Usually no. It’s more often a resource allocation / quality selection issue.

Should I block low-value pages with robots.txt?

If you want them not indexed, prefer noindex (for pages you can crawl) so Google can see the directive. Use robots.txt carefully—blocking can prevent Google from seeing canonicals/noindex signals.


Want a page-specific action plan?

Run a Traffly check to convert GSC statuses into prioritized fixes (canonicals, robots/noindex, internal links, and content flags).

Analyze My URL

Lucas

Editor at Traffly Blog