
feat: add respect_robots_txt_file option #1162


Open

wants to merge 17 commits into master

Conversation

Mantisus
Collaborator

Description

  • This PR implements automatic skipping of requests based on the robots.txt file, controlled by a new boolean crawler option called respect_robots_txt_file.
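
For illustration, here is a minimal sketch of how the flag might be enabled, assuming it is exposed on the crawler constructors as described above (the start URL and handler body are placeholders):

import asyncio

from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def main() -> None:
    # Enable the option added by this PR: requests to URLs disallowed
    # by the target site's robots.txt are skipped automatically.
    crawler = BeautifulSoupCrawler(respect_robots_txt_file=True)

    @crawler.router.default_handler
    async def handler(context: BeautifulSoupCrawlingContext) -> None:
        context.log.info(f'Processing {context.request.url}')
        await context.enqueue_links()

    await crawler.run(['https://crawlee.dev/'])


if __name__ == '__main__':
    asyncio.run(main())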

Issues

Testing

  • Add tests checking that EnqueueLinksFunction honors respect_robots_txt_file across the crawlers (a test sketch follows below)
  • Add tests for RobotsTxtFile
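
A rough sketch of the enqueue-links test mentioned above; the server_url fixture, the start path, and the 'disallowed' marker are hypothetical stand-ins for whatever the unit-test server actually serves:

from crawlee.crawlers import BeautifulSoupCrawler, BeautifulSoupCrawlingContext


async def test_enqueue_links_respects_robots_txt(server_url: str) -> None:
    crawler = BeautifulSoupCrawler(respect_robots_txt_file=True)
    visited: list[str] = []

    @crawler.router.default_handler
    async def handler(context: BeautifulSoupCrawlingContext) -> None:
        visited.append(context.request.url)
        await context.enqueue_links()

    await crawler.run([f'{server_url}/start_enqueue'])

    # Links disallowed by the served robots.txt must never be crawled.
    assert all('disallowed' not in url for url in visited)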

@Mantisus Mantisus requested a review from Copilot April 17, 2025 23:41

@Copilot Copilot AI left a comment


Pull Request Overview

This PR introduces a new boolean flag, respect_robots_txt_file, to automatically skip crawling disallowed URLs based on a site's robots.txt rules. Key changes include the addition of tests for robots.txt handling across multiple crawler implementations, integration of robots.txt checking in the crawling pipeline, and the implementation of a RobotsTxtFile utility.

Reviewed Changes

Copilot reviewed 12 out of 12 changed files in this pull request and generated 3 comments.

File | Description
tests/unit/server_endpoints.py | Added a static ROBOTS_TXT response to simulate a robots.txt file.
tests/unit/server.py | Introduced a new endpoint to serve robots.txt and updated routing logic.
tests/unit/crawlers/_playwright/test_playwright_crawler.py | Added tests verifying that the PlaywrightCrawler correctly respects robots.txt.
tests/unit/crawlers/_parsel/test_parsel_crawler.py | Introduced tests for the ParselCrawler to validate robots.txt handling.
tests/unit/crawlers/_beautifulsoup/test_beautifulsoup_crawler.py | Added tests to ensure BeautifulSoupCrawler adheres to robots.txt rules.
tests/unit/_utils/test_robots.py | New tests for generating, parsing, and validating robots.txt file behavior.
src/crawlee/crawlers/_playwright/_playwright_crawler.py | Integrated robots.txt enforcement into the link extraction logic.
src/crawlee/crawlers/_basic/_basic_crawler.py | Updated request adding and session handling to respect robots.txt directives.
src/crawlee/crawlers/_abstract_http/_abstract_http_crawler.py | Added robots.txt checking to link extraction for HTTP-based crawling.
src/crawlee/_utils/robots.py | Implemented the RobotsTxtFile class for parsing and handling robots.txt data.
pyproject.toml | Added a dependency on protego to support robots.txt parsing.

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@@ -40,6 +40,7 @@ dependencies = [
"eval-type-backport>=0.2.0",
"httpx[brotli,http2,zstd]>=0.27.0",
"more-itertools>=10.2.0",
"protego>=0.4.0",
Collaborator

It's fun to see another Scrapy project here, but I guess it guarantees some stability, so... all good.

Collaborator Author

Yes, I was planning to use RobotFileParser, but it doesn't support Google's specification. 😞
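
For context, a small illustration (with a made-up robots.txt) of what Protego handles that urllib's RobotFileParser does not, namely Google-style wildcard rules:

from protego import Protego

robots_txt = """
User-agent: *
Disallow: /private/*.html$
Allow: /private/public.html
"""

robots = Protego.parse(robots_txt)

print(robots.can_fetch('https://example.com/private/public.html', '*'))  # True
print(robots.can_fetch('https://example.com/private/secret.html', '*'))  # False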

self._robots = robots
self._original_url = URL(url).origin()

@staticmethod
Collaborator

I'd prefer using @classmethod and the Self return type annotation
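
A sketch of the suggested shape (method and parameter names are illustrative, not necessarily the PR's exact API):

from __future__ import annotations

from protego import Protego  # type: ignore[import-untyped]
from typing_extensions import Self
from yarl import URL


class RobotsTxtFile:
    def __init__(self, url: str, robots: Protego) -> None:
        self._robots = robots
        self._original_url = URL(url).origin()

    @classmethod
    def from_content(cls, url: str, content: str) -> Self:
        # A classmethod returning Self stays correct for subclasses,
        # unlike a staticmethod that has to name the class explicitly.
        return cls(url, Protego.parse(content))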

robots_txt_file = await self._get_robots_txt_file_for_url(url)
return not robots_txt_file or robots_txt_file.is_allowed(url)

async def _get_robots_txt_file_for_url(self, url: str) -> RobotsTxtFile | None:
Collaborator

I believe we should use some synchronization mechanism so that we don't fetch the same robots.txt file multiple times in parallel.
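
One possible shape for that, as a rough sketch (the cache attributes and the fetch helper are hypothetical, not the PR's actual names): hold an asyncio.Lock per origin so only the first lookup downloads the file and concurrent lookups reuse the cached result.

from __future__ import annotations

import asyncio
from collections import defaultdict
from typing import TYPE_CHECKING

from yarl import URL

if TYPE_CHECKING:
    # The class this PR adds in src/crawlee/_utils/robots.py.
    from crawlee._utils.robots import RobotsTxtFile


class RobotsTxtCache:
    def __init__(self) -> None:
        self._robots_txt_files: dict[str, RobotsTxtFile | None] = {}
        self._robots_txt_locks: defaultdict[str, asyncio.Lock] = defaultdict(asyncio.Lock)

    async def _get_robots_txt_file_for_url(self, url: str) -> RobotsTxtFile | None:
        origin = str(URL(url).origin())
        async with self._robots_txt_locks[origin]:
            if origin not in self._robots_txt_files:
                # Only the first coroutine for this origin performs the fetch;
                # later callers wait on the lock and reuse the cached result.
                self._robots_txt_files[origin] = await self._fetch_robots_txt_file(origin)
            return self._robots_txt_files[origin]

    async def _fetch_robots_txt_file(self, origin: str) -> RobotsTxtFile | None:
        # Hypothetical helper: download {origin}/robots.txt and parse it,
        # returning None when the file is missing or unreachable.
        raise NotImplementedError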

Collaborator

@vdusek vdusek left a comment

Nice! I have a few details... And also, could you please write a new guide/example regarding this feature?


from typing import TYPE_CHECKING

from protego import Protego # type: ignore[import-untyped]
Collaborator

Could we please update the project toml rather than using a type ignore?
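
If the idea is to configure mypy in the project toml instead, an override along these lines (a sketch, assuming mypy settings live in pyproject.toml) would let the inline ignore be dropped:

[[tool.mypy.overrides]]
module = ["protego"]
ignore_missing_imports = true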

"""Create a RobotsTxtFile instance from the given content.
Args:
url: the URL of the robots.txt file
Collaborator

Could you please use full sentences in the arg descriptions? (applies to all occurrences)
