
Robots.txt Generator


General directives

Configure the default behavior for all crawlers before layering overrides on top.

Set global allow or block rules for User-agent: *.

Throttle crawlers if your server needs breathing room.

Optional Host directive for mirrored domains.

One path per line. Wildcards and trailing slashes are supported.

Ensure specific folders stay crawlable even when a broader path is blocked.

Enter one sitemap URL per line. Add extra sitemap indexes if you have them.

Common crawlers

Toggle the switch for any crawler you want to block entirely. The default rules above stay applied.

Custom rules

Add user agents with tailored allow or block directives, crawl delays, and sitemap hints.

Copy the generated file above and upload it to your domain root as robots.txt.

Generate a robots.txt file to control how search engines crawl your site.

Robots.txt is a small text file that guides search bots on your site. It tells crawlers which areas they can access and which paths they should avoid. This keeps crawling focused on pages that matter and reduces wasted visits on low-value URLs.

Use robots.txt to block areas like admin pages, staging folders, test URLs, filter pages, and duplicate paths. When your rules are clear, search engines spend more time on your important pages. That can help new content get discovered faster and keep crawling clean and predictable.

Robots.txt is part of the robots exclusion standard. You place it at:

yourdomain.com/robots.txt

Search engines often check this file early because it gives them clear crawling directions. If your site is small, it may still get indexed without a robots.txt file. But on larger sites, missing guidance can lead to wasted crawling and slower discovery of key pages.
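A minimal file is often enough to start with. Here is a common starting point (the /admin/ path and the sitemap URL are placeholders to replace with your own):

```
User-agent: *
Disallow: /admin/

Sitemap: https://yourdomain.com/sitemap.xml
```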

One important point:

  • Robots.txt controls crawling
  • It does not guarantee indexing

If you want to confirm that a page can appear in search results, use an indexability check. That helps you spot signals like noindex, blocked resources, or other issues that robots.txt does not cover.
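The crawling side of this can be checked locally. As a sketch, Python's standard urllib.robotparser evaluates whether a user agent may fetch a URL under a given rule set (the example.com URLs are placeholders; note that Python's parser applies rules in file order, which is why the Allow line comes before Disallow here, whereas major search engines use longest-path matching):

```python
from urllib.robotparser import RobotFileParser

# Rules can be parsed directly from their lines; parse() also works
# after set_url() + read() against a live /robots.txt.
rules = """\
User-agent: *
Allow: /admin/help/
Disallow: /admin/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("*", "https://example.com/admin/login"))  # False
print(rp.can_fetch("*", "https://example.com/admin/help/"))  # True
print(rp.can_fetch("*", "https://example.com/blog/post"))    # True
```

Remember that this only answers the crawling question; a crawlable page can still carry a noindex signal that keeps it out of search results.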

Search engines do not crawl every page every day. They crawl based on limits and signals such as site speed, server health, and how often your content changes.

If your site is slow or returns errors, crawlers may visit fewer pages per run. That can delay indexing for new posts and updated pages. Robots.txt helps by reducing wasted crawls, so bots spend more time on the pages you actually want them to focus on.

For best results, use robots.txt with a sitemap:

  • Robots.txt guides bots on what to crawl or skip
  • Sitemap lists the pages you want crawled and indexed

A robots.txt file uses a few simple directives. They are easy to read, but you must write them carefully.

  • User-agent — specifies which bot the following rules apply to
  • Disallow — blocks crawling for a folder or path
  • Allow — opens a specific path inside a blocked folder
  • Crawl-delay — requests slower crawling from some bots (not all bots honor it)
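Put together, the four directives might look like this in a file (the paths and the Bingbot delay are illustrative, not recommendations):

```
User-agent: *
Allow: /private/press-kit/
Disallow: /private/

User-agent: Bingbot
Crawl-delay: 10
```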

A small mistake can block important pages, including key categories or core landing pages. That is why using a generator is safer than writing everything manually.

WordPress can create many URLs that do not help SEO, such as internal search pages, some archive pages, and parameter-based URLs. Blocking low-value areas helps crawlers spend more time on your main pages, blog posts, and product or service pages.
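A common WordPress-oriented rule set looks like the sketch below (the /wp-admin/ paths reflect WordPress defaults; the search paths and sitemap URL are assumptions to adapt to your setup):

```
User-agent: *
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php
Disallow: /?s=
Disallow: /search/

Sitemap: https://yourdomain.com/sitemap.xml
```

Keeping admin-ajax.php open matters because front-end features on many themes depend on it even though the rest of /wp-admin/ is blocked.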

Even on smaller sites, a clean robots.txt file is a smart setup. It keeps your crawl rules organized as the site grows.

A sitemap helps search engines discover the pages you want crawled. Robots.txt controls where bots can go.

  • Sitemap improves discovery
  • Robots.txt controls crawling access

Most websites benefit from using both.

Robots.txt is simple, but it is not forgiving. One wrong rule can block key pages. This generator helps you build the file safely.

Set default access

Choose whether all bots can crawl your site by default.

Add your sitemap URL

Include your sitemap so crawlers can find your important pages faster.

Add disallowed paths carefully

Block only what you truly do not want crawled. Always start with a forward slash, like:

/admin/ or /search/

Review before publishing

Double-check that you did not block your homepage, blog, category pages, or main service pages.
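Following the steps above, a finished file might look like this (the blocked paths and sitemap URL are placeholders to replace with your own):

```
# Default access: all bots allowed except the paths below
User-agent: *
Disallow: /admin/
Disallow: /search/

# Sitemap so crawlers find important pages faster
Sitemap: https://yourdomain.com/sitemap.xml
```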

Robots.txt is one part of technical SEO. These tools support the same goal and help you confirm everything is working correctly: