The robots exclusion standard, or robots.txt protocol, is a convention by which well-behaved web spiders and other web robots are asked not to access all or part of a website. The parts that should not be accessed are listed in a file called robots.txt, placed in the top-level directory of the website.
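
For example, a site that wanted to keep robots out of two hypothetical directories, /cgi-bin/ and /tmp/, might publish a robots.txt containing the lines

    User-agent: *
    Disallow: /cgi-bin/
    Disallow: /tmp/

where "User-agent: *" addresses all robots and each "Disallow" line gives a path prefix that cooperating robots should not retrieve.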

The robots.txt protocol is purely advisory and relies on the cooperation of the web robot, so marking an area of a site out of bounds with robots.txt does not guarantee privacy. Many website administrators have been caught out trying to use the robots file to make private parts of a website invisible to the rest of the world; however, the file itself is necessarily publicly available, and the list of excluded paths is easily read by anyone with a web browser.

The patterns in robots.txt are matched as simple prefixes of the URL path, so care should be taken to append a final '/' to patterns that are meant to match directories: otherwise every path that starts with that string will match, rather than only the contents of the intended directory.
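
The effect of the trailing slash can be seen with a short sketch using Python's standard urllib.robotparser module, which applies the same prefix matching; the host name and paths below are invented for illustration:

    import urllib.robotparser

    # Without a trailing slash, the rule matches every path that
    # starts with the string, not just the directory's contents.
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(["User-agent: *", "Disallow: /private"])
    print(rp.can_fetch("*", "http://www.example.com/private/report.html"))  # False
    print(rp.can_fetch("*", "http://www.example.com/private-notes.html"))   # False (unintended)

    # With the trailing slash, only paths inside the directory are excluded.
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(["User-agent: *", "Disallow: /private/"])
    print(rp.can_fetch("*", "http://www.example.com/private/report.html"))  # False
    print(rp.can_fetch("*", "http://www.example.com/private-notes.html"))   # True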

There is also a convention for HTML meta tags that may be used to exclude robots on a page-by-page basis. Again, this is purely advisory and relies on the cooperation of the robot programs. For example, the HTML code for this page includes the line

    <meta name="robots" content="index,follow">

near the top, which tells search engines such as Google to include the page in their indexes and to follow all links on it for further possible indexing.
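
Conversely, a page that should be kept out of search indexes can include a tag such as the following (the exact keyword combinations honoured vary from robot to robot):

    <meta name="robots" content="noindex,nofollow">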