Foreword
robots.txt holds the crawl permissions for search engines on our website: which pages may be crawled and which may not.
Operating procedures
1. Create a robots.txt file under the static folder
2. The content of the file is as follows
- A # marks a comment here, equivalent to // in JavaScript
- User-agent: * The asterisk wildcard means any crawler may access the site; you can also name a specific crawler
Example: User-agent: Googlebot
- Disallow: left empty, it means crawlers are allowed to crawl all of the site's content; given a path, the specified pages are not crawled
Example:
Disallow: /joe/junk.html
Disallow: /joe/foo.html
- You can also set Sitemap, which points to the site map, one of the core pieces of SEO. It lists all the pages of the site so they get indexed and the site ranks higher. If your website does not have a site map, you can omit the Sitemap line. How to generate a sitemap in Nuxt will be covered in a separate article.
- For more detailed configuration, see the official robots.txt documentation
User-agent: *
Disallow:
Sitemap: your-site-url/sitemap.xml
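As a quick sanity check (separate from the Nuxt setup itself), you can verify how crawlers interpret these rules with Python's standard-library robots.txt parser. This sketch feeds in the Disallow examples from above and asks whether specific paths may be fetched:

```python
from urllib.robotparser import RobotFileParser

# The rules from the Disallow examples above
rules = """\
User-agent: *
Disallow: /joe/junk.html
Disallow: /joe/foo.html
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# A disallowed page is blocked for every crawler ("*")
print(parser.can_fetch("*", "/joe/junk.html"))  # False
# Any page not listed under Disallow stays crawlable
print(parser.can_fetch("*", "/index.html"))     # True
```

Note that with `Disallow:` left empty, as in the file above, `can_fetch` returns True for every path, matching the "allow everything" behavior described earlier.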