What is robots.txt Builder?
A robots.txt file gives crawlers site-level access rules. It can allow or disallow paths, point to the sitemap, and document crawler policies. Static sites often need a small, predictable robots.txt because build outputs and GitHub Pages deployments publish only the files present in the final output folder, so the file has to be written explicitly rather than generated on the fly.
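For example, a minimal robots.txt that allows all crawlers, blocks one path, and points to a sitemap could look like the sketch below; example.com and the /drafts/ path are placeholders for your own site URL and blocked path.

    User-agent: *
    Allow: /
    Disallow: /drafts/

    Sitemap: https://example.com/sitemap.xml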
How to use this tool
- Enter the public site URL and sitemap URL.
- Choose whether crawlers in general should be allowed across the whole site (the User-agent: * group).
- Add Disallow paths only when there is a real reason to block crawling; robots.txt is a public, advisory file, not access control.
- Copy the result into robots.txt at the root of the published site, as in the layout sketch after this list.
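As a rough sketch, for a user or organization GitHub Pages site published from the repository root, the final layout would put robots.txt next to index.html; the folder and account names here are illustrative.

    my-site/
        index.html
        sitemap.xml
        robots.txt    <- served at https://example.github.io/robots.txt

Crawlers only read robots.txt at the root of a host, so the file belongs in whichever folder GitHub Pages actually publishes for the domain.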
What you can use it for
- Create a clean robots.txt file for GitHub Pages.
- Add a sitemap reference without hand-writing the file.
- Document public crawler access before launch.