Noindex, Nofollow - Understanding and Using Meta-Robots-Tags Correctly
Meta-Robots-Tags are HTML meta elements that let website operators control search engine crawlers in a targeted way. They tell crawlers whether a page may be indexed and whether the links on it should be followed.
What are Meta-Robots-Tags?
Basic Functionality
Meta-Robots-Tags act as a direct line of communication between a website and search engines. They are placed in the <head> section of an HTML page and give crawlers specific instructions.
The Most Important Meta-Robots Directives
Noindex - Prevent Indexing
The noindex directive prevents a page from being included in the search engine index.
<meta name="robots" content="noindex">
Use Cases for Noindex:
- Test and development pages
- Private areas (login, admin)
- Duplicate content (e.g., print versions)
- Temporary pages
- Pages with low value for users
Nofollow - Don't Pass Link Juice
The nofollow directive tells crawlers not to follow any of the links on the page, so no PageRank (link juice) is passed through them. (To devalue individual links instead of the whole page, use the rel="nofollow", rel="sponsored", or rel="ugc" attribute on the link itself.)
<meta name="robots" content="nofollow">
Use Cases for Nofollow:
- User-generated content
- Sponsored links
- External links to untrustworthy pages
- Internal links to unimportant pages
Combined Directives
Multiple directives are often combined. Note that index and follow are the default behavior, so declaring them is only needed for clarity:
<meta name="robots" content="noindex, nofollow">
<meta name="robots" content="index, nofollow">
<meta name="robots" content="noindex, follow">
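How such a content attribute resolves into an effective directive set can be illustrated with a small helper. This is a hypothetical sketch (the function name and the default handling are illustrative, not part of any standard API):

```python
def parse_robots_content(content: str) -> set[str]:
    """Split a robots meta 'content' attribute into normalized directives.

    'index' and 'follow' are the defaults, so they are added whenever the
    corresponding negative directive is absent.
    """
    directives = {d.strip().lower() for d in content.split(",") if d.strip()}
    if "noindex" not in directives:
        directives.add("index")
    if "nofollow" not in directives:
        directives.add("follow")
    return directives

print(parse_robots_content("noindex, follow"))  # {'noindex', 'follow'}
print(parse_robots_content(""))                 # {'index', 'follow'} - the defaults
```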
Meta-Robots-Tags in Detail
Indexing Directives
- index / noindex - allow or prevent inclusion of the page in the search index
- noarchive - prevent search engines from showing a cached copy of the page
- nosnippet - prevent a text or video snippet from being shown in the results
Crawling Directives
- follow / nofollow - allow or prevent crawlers from following the links on the page
Additional Directives
- noimageindex - prevent images on the page from being indexed
- max-snippet:[number] - limit the length of the text snippet
- unavailable_after:[date] - stop showing the page in results after the given date
Practical Use Cases
E-Commerce: Product Variants
For product pages with many variants (color, size), noindex can be useful for variant pages:
<!-- Main product page -->
<meta name="robots" content="index, follow">
<!-- Product variants -->
<meta name="robots" content="noindex, follow">
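Server-side, the choice between the two tags often hinges on whether the requested URL is a variant. A minimal sketch, assuming variants are signaled by query parameters named color and size (both parameter names are illustrative assumptions):

```python
from urllib.parse import urlparse, parse_qs

# Query parameters that mark a URL as a product variant (illustrative names).
VARIANT_PARAMS = {"color", "size"}

def robots_meta_for(url: str) -> str:
    """Return the robots meta tag for a product URL.

    Variant URLs get 'noindex, follow' so crawlers still reach linked pages;
    the main product page stays fully indexable.
    """
    params = parse_qs(urlparse(url).query)
    if VARIANT_PARAMS & params.keys():
        return '<meta name="robots" content="noindex, follow">'
    return '<meta name="robots" content="index, follow">'

print(robots_meta_for("https://shop.example/product?color=red"))
```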
Blog: Category and Tag Pages
Often category pages have little unique content:
<!-- Main article -->
<meta name="robots" content="index, follow">
<!-- Category overview -->
<meta name="robots" content="noindex, follow">
Corporate Websites: Legal Pages
Imprint, privacy policy, and terms-of-service pages usually have no SEO value:
<meta name="robots" content="noindex, nofollow">
Common Mistakes and Pitfalls
1. Incorrect Implementation
❌ Wrong:
<meta name="robot" content="noindex">
✅ Correct:
<meta name="robots" content="noindex">
2. Contradictory Directives
❌ Problematic:
<meta name="robots" content="noindex, index">
Most search engines resolve such conflicts by applying the most restrictive directive (here: noindex), but the ambiguous signal should still be removed.
3. Forgotten Canonical Tags
A canonical tag on a noindex page sends mixed signals: the canonical suggests a URL for indexing while noindex forbids it. For noindex pages, canonical tags should therefore be removed:
❌ Wrong:
<meta name="robots" content="noindex">
<link rel="canonical" href="https://example.com/page">
✅ Correct:
<meta name="robots" content="noindex">
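Such conflicts are easy to scan for. A sketch using simple substring checks (a heuristic for illustration, not a full HTML parse; the function name is an assumption):

```python
def has_noindex_canonical_conflict(html: str) -> bool:
    """Heuristic: flag pages that carry both a noindex robots meta tag
    and a rel="canonical" link - a mixed signal for search engines."""
    lowered = html.lower()
    has_noindex = 'name="robots"' in lowered and "noindex" in lowered
    has_canonical = 'rel="canonical"' in lowered
    return has_noindex and has_canonical

page = ('<meta name="robots" content="noindex">\n'
        '<link rel="canonical" href="https://example.com/page">')
print(has_noindex_canonical_conflict(page))  # True - this page mixes signals
```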
X-Robots-Tag as Alternative
For dynamic control, the X-Robots-Tag can be set as an HTTP response header:
X-Robots-Tag: noindex, nofollow
Advantages of X-Robots-Tag:
- Works for non-HTML files (e.g., PDFs, images) too
- Can be set dynamically server-side
- Multiple headers can be combined and scoped to specific crawlers (e.g., X-Robots-Tag: googlebot: noindex)
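In practice this is usually configured in the web server (e.g., via Apache's Header directive or nginx's add_header). The decision logic itself can be sketched as a small function; the extension list and function name below are illustrative assumptions:

```python
import os

# File types that should be kept out of the index (illustrative choice).
NOINDEX_EXTENSIONS = {".pdf", ".doc", ".xls"}

def extra_headers(path: str) -> list[tuple[str, str]]:
    """Return HTTP headers to attach to the response for the given path."""
    _, ext = os.path.splitext(path)
    if ext.lower() in NOINDEX_EXTENSIONS:
        return [("X-Robots-Tag", "noindex, nofollow")]
    return []

print(extra_headers("/downloads/report.pdf"))
# [('X-Robots-Tag', 'noindex, nofollow')]
```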
Testing and Validation
Google Search Console
Check the indexing status in GSC:
- Open "Indexing" → "Pages" (formerly "Coverage")
- Review the "Excluded by 'noindex' tag" reason
Browser Tools
Use the browser developer tools:
- Press F12 and open the Elements panel
- Search for <meta name="robots">
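The same check can be automated. A small sketch using only the Python standard library (class and function names are illustrative):

```python
from html.parser import HTMLParser

class RobotsMetaExtractor(HTMLParser):
    """Collect the content attribute of every <meta name="robots"> tag."""

    def __init__(self):
        super().__init__()
        self.directives: list[str] = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "meta" and attrs.get("name", "").lower() == "robots":
            self.directives.append(attrs.get("content", ""))

def robots_directives(html: str) -> list[str]:
    parser = RobotsMetaExtractor()
    parser.feed(html)
    return parser.directives

page = '<html><head><meta name="robots" content="noindex, follow"></head></html>'
print(robots_directives(page))  # ['noindex, follow']
```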
Online Tools
- Google Rich Results Test
- Screaming Frog SEO Spider
Best Practices for 2025
1. Strategic Application
- Use noindex only for justified cases
- Document all noindex decisions
- Review indexing regularly
2. Performance Optimization
- Combine with other technical SEO measures
- Use X-Robots-Tag for better performance
- Implement server-side when possible
3. Monitoring
- Monitor indexing changes
- Check crawl budget efficiency
- Analyze impact on rankings
Checklist: Using Meta-Robots-Tags Correctly
Before Implementation:
- ☐ Page goal defined
- ☐ SEO value assessed
- ☐ Alternative solutions checked
- ☐ Impact on link juice considered
During Implementation:
- ☐ Correct syntax used
- ☐ Contradictions avoided
- ☐ Canonical tags adjusted
- ☐ Sitemap updated
After Implementation:
- ☐ GSC status checked
- ☐ Crawling behavior observed
- ☐ Performance metrics analyzed
- ☐ Regular reviews scheduled