Understanding Robots.txt
The robots.txt file is the standard way websites communicate with search engine crawlers and other web robots. It tells bots which parts of your site they may and may not access.
What is robots.txt?
Robots.txt is a plain text file placed at the root of your website (e.g., example.com/robots.txt) that follows the Robots Exclusion Protocol (standardized as RFC 9309). It contains rules, grouped by user-agent, that tell crawlers which URL paths they may and may not crawl.
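For example, a short robots.txt might look like this (the crawler name and paths are only illustrative):

    User-agent: *
    Disallow: /admin/
    Allow: /admin/help

    User-agent: Googlebot
    Disallow: /drafts/

A crawler follows only the group that matches it most specifically: here Googlebot follows its own group (stay out of /drafts/), while every other crawler follows the * group (stay out of /admin/, except the /admin/help page).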
How to Use This Tool
Paste your robots.txt content into the text area (or fetch it from a domain), enter the URL path you want to test, select a user-agent, and click 'Test URL'. The tool instantly tells you whether that path is allowed or blocked for the selected crawler.
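Under the hood, a check like this follows the Robots Exclusion Protocol's matching rules: find the rule group for the chosen user-agent (falling back to *), then among the Allow and Disallow paths that prefix-match the tested path, the longest one wins, with Allow winning ties. The TypeScript sketch below illustrates that logic under simplifying assumptions; it is not this tool's actual implementation and ignores wildcard patterns (* and $), Crawl-delay, and Sitemap lines.

    // Simplified Robots Exclusion Protocol matcher (illustrative only).
    // Assumes plain prefix rules; real parsers also handle * and $ wildcards.

    type Rule = { allow: boolean; path: string };

    // Parse robots.txt into rule groups keyed by lowercased user-agent.
    function parseRobotsTxt(content: string): Map<string, Rule[]> {
      const groups = new Map<string, Rule[]>();
      let currentAgents: string[] = [];
      let lastWasAgent = false;

      for (const rawLine of content.split(/\r?\n/)) {
        const line = rawLine.split("#")[0].trim(); // drop comments and whitespace
        const m = line.match(/^([A-Za-z-]+)\s*:\s*(.*)$/);
        if (!m) continue;
        const field = m[1].toLowerCase();
        const value = m[2].trim();

        if (field === "user-agent") {
          if (!lastWasAgent) currentAgents = []; // a new group starts here
          const agent = value.toLowerCase();
          currentAgents.push(agent);
          if (!groups.has(agent)) groups.set(agent, []);
          lastWasAgent = true;
        } else if (field === "allow" || field === "disallow") {
          for (const agent of currentAgents) {
            groups.get(agent)!.push({ allow: field === "allow", path: value });
          }
          lastWasAgent = false;
        }
      }
      return groups;
    }

    // Decide whether a path is allowed for a user-agent: the longest matching
    // rule wins, Allow wins ties, and no matching rule means allowed.
    function isAllowed(groups: Map<string, Rule[]>, userAgent: string, path: string): boolean {
      const rules = groups.get(userAgent.toLowerCase()) ?? groups.get("*") ?? [];
      let best: Rule | null = null;
      for (const rule of rules) {
        if (rule.path === "") continue; // an empty Disallow matches nothing
        if (path.startsWith(rule.path)) {
          if (
            !best ||
            rule.path.length > best.path.length ||
            (rule.path.length === best.path.length && rule.allow)
          ) {
            best = rule;
          }
        }
      }
      return best ? best.allow : true;
    }

Because ties go to Allow, a rule like Allow: /admin/help can carve an exception out of a broader Disallow: /admin/ rule, as in the example file above.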
Why Test Your Robots.txt?
- Prevent accidentally blocking important pages from search engines
- Check that private or admin areas are excluded for compliant crawlers (robots.txt is advisory, so it should not be your only protection)
- Debug crawling issues before they impact your search rankings
- Validate changes before deploying to production (see the pre-deploy sketch after this list)
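As a concrete, hypothetical example of that last point, a pre-deploy script could reuse the parseRobotsTxt and isAllowed sketch above to fail the build if a proposed robots.txt would block any page you care about; the file content, paths, and user-agent below are placeholders:

    // Hypothetical pre-deploy guard: reject a robots.txt that blocks critical paths.
    const proposed = `
    User-agent: *
    Disallow: /drafts/
    `;
    const groups = parseRobotsTxt(proposed);
    for (const path of ["/", "/products/", "/blog/"]) {
      if (!isAllowed(groups, "googlebot", path)) {
        throw new Error(`Proposed robots.txt would block ${path}`);
      }
    }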
Privacy Guaranteed
This tool runs entirely in your browser. Your robots.txt content and test URLs are never sent to any server. Perfect for testing rules that contain sensitive paths.