Recent Releases of scrapy

scrapy -

  • Changed the values for DOWNLOAD_DELAY (from 0 to 1) and CONCURRENT_REQUESTS_PER_DOMAIN (from 8 to 1) in the default project template.
  • Fixed several bugs in the engine initialization and exception handling logic.
  • Allowed running tests with Twisted 25.5.0+ again and fixed test failures with lxml 6.0.0.

See the full changelog

- Python
Published by wRAR 8 months ago

scrapy -

  • Fixed a bug introduced in Scrapy 2.13.0 that caused results of request errbacks to be ignored when the errback was called because of a downloader error.
  • Improvements to docs and error messages related to the Scrapy 2.13.0 default reactor change.

- Python
Published by wRAR 9 months ago

scrapy - 2.13.1

  • Callback requests now take precedence over start requests when priority values are the same.

- Python
Published by wRAR 9 months ago

scrapy -

  • The asyncio reactor is now enabled by default
  • Replaced start_requests() (sync) with start() (async) and changed how it is iterated (see the sketch below).
  • Added the allow_offsite request meta key
  • Spider middlewares that don't support asynchronous spider output are deprecated
  • Added a base class for universal spider middlewares
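
A minimal sketch of the new asynchronous start() entry point, assuming Scrapy 2.13+; the URL and scraped field are placeholders:

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"

    # Replaces the synchronous start_requests(); Scrapy now iterates this
    # as an asynchronous generator.
    async def start(self):
        yield scrapy.Request("https://example.com", callback=self.parse)

    def parse(self, response):
        yield {"title": response.css("title::text").get()}
```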

See the full changelog

- Python
Published by wRAR 10 months ago

scrapy - 2.12.0

  • Dropped support for Python 3.8, added support for Python 3.13
  • start_requests can now yield items
  • Added scrapy.http.JsonResponse
  • Added the CLOSESPIDER_PAGECOUNT_NO_ITEM setting (see the sketch below)
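
A hedged settings sketch; the threshold value is illustrative:

```python
# settings.py
# Close the spider once this many consecutive responses yield no items
# (CLOSESPIDER_PAGECOUNT_NO_ITEM, added in Scrapy 2.12).
CLOSESPIDER_PAGECOUNT_NO_ITEM = 50
```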

See the full changelog.

- Python
Published by wRAR over 1 year ago

scrapy - 2.11.2

Mostly bug fixes, including security bug fixes.

See the full changelog.

- Python
Published by Gallaecio almost 2 years ago

scrapy - 1.8.4

Security bug fixes.

See the full changelog.

- Python
Published by Gallaecio about 2 years ago

scrapy -

  • Security bug fixes.
  • Support for Twisted >= 23.8.0.
  • Documentation improvements.

See the full changelog.

- Python
Published by Gallaecio about 2 years ago

scrapy - 2.11.0

  • Spiders can now modify settings in their from_crawler methods, e.g. based on spider arguments (see the sketch below).
  • Periodic logging of stats.
  • Bug fixes.
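
A minimal sketch of adjusting a setting from from_crawler, assuming Scrapy 2.11+; the "fast" spider argument and the setting being changed are illustrative:

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super().from_crawler(crawler, *args, **kwargs)
        # Settings are still mutable at this point, so the spider can
        # react to its own arguments (e.g. scrapy crawl example -a fast=1).
        if getattr(spider, "fast", None):
            crawler.settings.set("DOWNLOAD_DELAY", 0, priority="spider")
        return spider
```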

See the full changelog.

- Python
Published by wRAR over 2 years ago

scrapy - 2.10.1

Marked Twisted >= 23.8.0 as unsupported.

- Python
Published by wRAR over 2 years ago

scrapy - 2.10.0

  • Added Python 3.12 support, dropped Python 3.7 support.
  • The new add-ons framework simplifies configuring third-party components that support it.
  • Exceptions to retry can now be configured (see the sketch below).
  • Many fixes and improvements for feed exports.
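
A hedged sketch, assuming this refers to the RETRY_EXCEPTIONS setting; the listed entries are illustrative:

```python
# settings.py
# Exceptions that should trigger a retry; entries may be exception
# classes or their import paths as strings.
RETRY_EXCEPTIONS = [
    "twisted.internet.defer.TimeoutError",
    OSError,
]
```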

See the full changelog.

- Python
Published by wRAR over 2 years ago

scrapy - 2.9.0

  • Per-domain download settings.
  • Compatibility with new cryptography and new parsel.
  • JMESPath selectors from the new parsel (see the example below).
  • Bug fixes.
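
A minimal example of the new JMESPath API, using parsel directly and assuming the jmespath package is installed; the JSON payload is illustrative:

```python
from parsel import Selector

# parsel 1.8+ auto-detects JSON input and exposes JMESPath queries.
selector = Selector(text='{"user": {"name": "alice"}}')
print(selector.jmespath("user.name").get())  # alice
```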

See the full changelog.

- Python
Published by wRAR almost 3 years ago

scrapy -

This is a maintenance release, with minor features, bug fixes, and cleanups.

See the full changelog.

- Python
Published by Gallaecio about 3 years ago

scrapy - 2.7.1

  • Relaxed the restriction introduced in 2.6.2 so that the Proxy-Authorization header can again be set explicitly in certain cases, restoring compatibility with scrapy-zyte-smartproxy 2.1.0 and older
  • Bug fixes

See the full changelog

- Python
Published by wRAR over 3 years ago

scrapy - 2.7.0

See the full changelog

- Python
Published by wRAR over 3 years ago

scrapy - 2.6.3

Makes pip install Scrapy work again.

Doing so required changes to support pyOpenSSL 22.1.0, and we had to drop SSLv3 support as a result.

We also raised the minimum versions of some dependencies.

See the changelog.

- Python
Published by Gallaecio over 3 years ago

scrapy - 2.6.2

Fixes a security issue around HTTP proxy usage, and addresses a few regressions introduced in Scrapy 2.6.0.

See the changelog.

- Python
Published by Gallaecio over 3 years ago

scrapy - 1.8.3

Fixes a security issue around HTTP proxy usage. See the changelog for details.

- Python
Published by Gallaecio over 3 years ago

scrapy - 1.8.2

Security bug fixes

  • When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.

    If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.

    The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.

    Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies. See the documentation of the Request class for more information.

  • When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix (https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.

    The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.

- Python
Published by Gallaecio almost 4 years ago

scrapy - 2.6.1

Fixes a regression introduced in 2.6.0 that would unset the request method when following redirects.

- Python
Published by Gallaecio almost 4 years ago

scrapy - 2.6.0

  • Security fixes for cookie handling (see details below)
  • Python 3.10 support
  • asyncio support is no longer considered experimental, and works out-of-the-box on Windows regardless of your Python version
  • Feed exports now support pathlib.Path output paths and per-feed item filtering and post-processing

See the full changelog

Security bug fixes

  • When a Request object with cookies defined gets a redirect response causing a new Request object to be scheduled, the cookies defined in the original Request object are no longer copied into the new Request object.

    If you manually set the Cookie header on a Request object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original Request object, your Cookie header is now dropped from the new Request object.

    The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.

    Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies (see the sketch below). See the documentation of the Request class for more information.

  • When the domain of a cookie, either received in the Set-Cookie header of a response or defined in a Request object, is set to a public suffix (https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.

    The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.
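
A hedged sketch of opting back into cookie sharing across a domain and its subdomains; the cookie name, value and domains are placeholders:

```python
import scrapy

# Setting the cookie domain to the shared suffix makes the cookie valid
# for example.com and all of its subdomains.
request = scrapy.Request(
    "https://www.example.com",
    cookies=[{"name": "session", "value": "token", "domain": "example.com"}],
)
```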

- Python
Published by Gallaecio almost 4 years ago

scrapy -

Security bug fix:

If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

To prevent exposing authentication credentials to unintended domains, you must now also set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.

If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests (see the sketch below).

If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.
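
A hedged sketch of both approaches; the credentials, domains and URLs are placeholders:

```python
import scrapy
from w3lib.http import basic_auth_header


class ExampleSpider(scrapy.Spider):
    name = "example"
    http_user = "user"
    http_pass = "secret"
    # Credentials from http_user/http_pass are now sent only to this domain.
    http_auth_domain = "api.example.com"

    def start_requests(self):
        yield scrapy.Request("https://api.example.com/data")
        # For any other domain, set the Authorization header explicitly.
        yield scrapy.Request(
            "https://files.example.org/export",
            headers={"Authorization": basic_auth_header("user", "secret")},
        )
```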

Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue to work.

- Python
Published by Gallaecio over 4 years ago

scrapy -

Security bug fix:

If you use HttpAuthMiddleware (i.e. the http_user and http_pass spider attributes) for HTTP authentication, any request exposes your credentials to the request target.

To prevent exposing authentication credentials to unintended domains, you must now also set a new spider attribute, http_auth_domain, and point it to the specific domain to which the authentication credentials must be sent.

If the http_auth_domain spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.

If you need to send the same HTTP authentication credentials to multiple domains, you can use w3lib.http.basic_auth_header instead to set the value of the Authorization header of your requests.

If you really want your spider to send the same HTTP authentication credentials to any domain, set the http_auth_domain spider attribute to None.

Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue to work.

- Python
Published by Gallaecio over 4 years ago

scrapy - 2.5.0

  • Official Python 3.9 support
  • Experimental HTTP/2 support
  • New get_retry_request() function to retry requests from spider callbacks (see the sketch below)
  • New headers_received signal that allows stopping downloads early
  • New Response.protocol attribute
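
A minimal sketch of retrying from a callback; the emptiness check and reason string are illustrative:

```python
import scrapy
from scrapy.downloadermiddlewares.retry import get_retry_request


class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    def parse(self, response):
        if not response.css("title::text").get():
            new_request = get_retry_request(
                response.request, spider=self, reason="empty title"
            )
            if new_request:  # None once the retry limit is exhausted
                yield new_request
```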

See the full changelog

- Python
Published by wRAR almost 5 years ago

scrapy - 2.4.1

  • Fixed feed exports overwrite support
  • Fixed the asyncio event loop handling, which could make code hang
  • Fixed the IPv6-capable DNS resolver CachingHostnameResolver for download handlers that call reactor.resolve
  • Fixed the output of the genspider command showing placeholders instead of the import path of the generated spider module (issue 4874)

- Python
Published by Gallaecio over 5 years ago

scrapy - 2.4.0

Highlights:

  • Python 3.5 support has been dropped.

  • The file_path method of media pipelines can now access the source item.

    This allows you to set a download file path based on item data, as shown in the sketch after this list.

  • The new item_export_kwargs key of the FEEDS setting allows defining keyword parameters to pass to item exporter classes.

  • You can now choose whether feed exports overwrite or append to the output file.

    For example, when using the crawl or runspider commands, you can use the -O option instead of -o to overwrite the output file.

  • Zstd-compressed responses are now supported if zstandard is installed.

  • In settings, where the import path of a class is required, it is now possible to pass a class object instead.
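
A minimal sketch of a media pipeline reading the source item in file_path(); the item field and naming scheme are placeholders:

```python
from scrapy.pipelines.files import FilesPipeline


class ItemNamedFilesPipeline(FilesPipeline):
    # file_path() can now receive the source item via the keyword-only
    # "item" argument (Scrapy 2.4+).
    def file_path(self, request, response=None, info=None, *, item=None):
        return f"files/{item['title']}.pdf"
```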

See the full changelog

- Python
Published by Gallaecio over 5 years ago

scrapy - 2.3.0

Highlights:

See the full changelog

- Python
Published by Gallaecio over 5 years ago

scrapy - 2.2.1

The startproject command no longer makes unintended changes to the permissions of files in the destination folder, such as removing execution permissions.

- Python
Published by Gallaecio over 5 years ago

scrapy -

Highlights:

See the full changelog

- Python
Published by Gallaecio over 5 years ago

scrapy -

Highlights:

  • New FEEDS setting to export to multiple feeds (see the sketch below)
  • New Response.ip_address attribute
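
A hedged settings sketch of exporting to two feeds at once; the paths and formats are illustrative:

```python
# settings.py
FEEDS = {
    "items.json": {"format": "json"},
    "items.csv": {"format": "csv"},
}
```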

See the full changelog

- Python
Published by Gallaecio almost 6 years ago

scrapy - 2.0.1

  • Response.follow_all now supports an empty URL iterable as input (#4408, #4420)
  • Removed top-level reactor imports to prevent errors about the wrong Twisted reactor being installed when setting a different Twisted reactor using TWISTED_REACTOR (#4401, #4406)

- Python
Published by Gallaecio almost 6 years ago

scrapy -

Highlights:

  • Python 2 support has been removed
  • Partial coroutine syntax support and experimental asyncio support
  • New Response.follow_all method (see the sketch below)
  • FTP support for media pipelines
  • New Response.certificate attribute
  • IPv6 support through DNS_RESOLVER
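
A minimal sketch of Response.follow_all(); the CSS selector is illustrative:

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    def parse(self, response):
        # Builds one request per matched link element (Scrapy 2.0+).
        yield from response.follow_all(css="a.next", callback=self.parse)
```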

See the full changelog

- Python
Published by Gallaecio almost 6 years ago

scrapy - 1.7.4

Revert the fix for #3804 (#3819), which has a few undesired side effects (#3897, #3976).

- Python
Published by Gallaecio over 6 years ago

scrapy - 1.7.3

Enforce lxml 4.3.5 or lower for Python 3.4 (#3912, #3918)

- Python
Published by Gallaecio over 6 years ago

scrapy - 1.7.2

Fix Python 2 support (#3889, #3893, #3896)

- Python
Published by Gallaecio over 6 years ago

scrapy - 1.7.0

Highlights:

  • Improvements for crawls targeting multiple domains
  • A cleaner way to pass arguments to callbacks (see the sketch after this list)
  • A new class for JSON requests
  • Improvements for rule-based spiders
  • New features for feed exports
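
A hedged sketch, assuming the cleaner argument passing refers to Request.cb_kwargs introduced in 1.7; the names are illustrative:

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"
    start_urls = ["https://example.com"]

    def parse(self, response):
        yield scrapy.Request(
            "https://example.com/details",
            callback=self.parse_details,
            cb_kwargs={"category": "books"},  # arrives as a keyword argument
        )

    def parse_details(self, response, category):
        yield {"url": response.url, "category": category}
```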

See the full changelog

- Python
Published by Gallaecio over 6 years ago

scrapy -

Highlights:

  • Better Windows support
  • Python 3.7 compatibility
  • Big documentation improvements, including a switch from the .extract_first() + .extract() API to the .get() + .getall() API (see the example after this list)
  • Feed exports, FilesPipeline and MediaPipeline improvements
  • Better extensibility: item_error and request_reached_downloader signals; from_crawler support for feed exporters, feed storages and dupefilters.
  • scrapy.contracts fixes and new features
  • Telnet console security improvements, first released as a backport in Scrapy 1.5.2 (2019-01-22)
  • Clean-up of the deprecated code
  • Various bug fixes, small new features and usability improvements across the codebase.
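
A quick example of the newer selector API next to the older spellings; the markup is illustrative:

```python
from parsel import Selector

selector = Selector(text="<html><head><title>Demo</title></head></html>")
title = selector.css("title::text").get()      # was .extract_first()
titles = selector.css("title::text").getall()  # was .extract()
```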

Full changelog is in the docs.

- Python
Published by dangra about 7 years ago

scrapy -

This release brings small new features and improvements across the codebase. Some highlights:

  • Google Cloud Storage is supported in FilesPipeline and ImagesPipeline.
  • Crawling with proxy servers becomes more efficient, as connections to proxies can be reused now.
  • Warnings, exception and logging messages are improved to make debugging easier.
  • The scrapy parse command now allows setting custom request meta via the --meta argument.
  • Compatibility with Python 3.6, PyPy and PyPy3 is improved; PyPy and PyPy3 are now supported officially, by running tests on CI.
  • Better default handling of HTTP 308, 522 and 524 status codes.
  • Documentation is improved, as usual.

Full changelog is in the docs.

- Python
Published by redapple about 8 years ago

scrapy - 1.3.3

Release notes at https://doc.scrapy.org/en/latest/news.html#scrapy-1-3-3-2017-03-10

- Python
Published by dangra about 8 years ago

scrapy - 1.4.0

Release notes at https://doc.scrapy.org/en/latest/news.html#scrapy-1-4-0-2017-05-18

- Python
Published by dangra about 8 years ago

scrapy -

Bug fixes

  • Fix a cryptic traceback when a pipeline fails on open_spider() (#2011)
  • Fix embedded IPython shell variables (fixing #396 that re-appeared in 1.2.0, fixed in #2418)
  • A couple of patches when dealing with robots.txt:
    • handle (non-standard) relative sitemap URLs (#2390)
    • handle non-ASCII URLs and User-Agents in Python 2 (#2373)

Documentation

  • Document "download_latency" key in Request's meta dict (#2033)
  • Remove page on (deprecated & unsupported) Ubuntu packages from ToC (#2335)
  • A few fixed typos (#2346, #2369, #2369, #2380) and clarifications (#2354, #2325, #2414)

Other changes

  • Advertise conda-forge as Scrapy's official conda channel (#2387)
  • More helpful error messages when trying to use .css() or .xpath() on non-Text Responses (#2264)
  • startproject command now generates a sample middlewares.py file (#2335)
  • Add more dependencies’ version info in scrapy version verbose output (#2404)
  • Remove all *.pyc files from source distribution (#2386)

- Python
Published by redapple about 9 years ago

scrapy -

Bug fixes

  • Include OpenSSL’s more permissive default ciphers when establishing TLS/SSL connections (#2314).
  • Fix “Location” HTTP header decoding on non-ASCII URL redirects (#2321).

Documentation

  • Fix JsonWriterPipeline example (#2302).
  • Various notes: #2330 on spider names, #2329 on middleware methods processing order, #2327 on getting multi-valued HTTP headers as lists.

Other changes

  • Removed www. from start_urls in built-in spider templates (#2299).

- Python
Published by redapple about 9 years ago

scrapy -

New Features

  • New FEED_EXPORT_ENCODING setting to customize the encoding used when writing items to a file. This can be used to turn off \uXXXX escapes in JSON output, and is also useful for those wanting something other than UTF-8 for XML or CSV output (#2034). See the example after this list.
  • startproject command now supports an optional destination directory to override the default one based on the project name (#2005).
  • New SCHEDULER_DEBUG setting to log requests serialization failures (#1610).
  • JSON encoder now supports serialization of set instances (#2058).
  • Interpret application/json-amazonui-streaming as TextResponse (#1503).
  • scrapy is imported by default when using shell tools (shell, inspect_response) (#2248).
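
A hedged settings example for the JSON case described above:

```python
# settings.py
# Write feeds as UTF-8 instead of escaping non-ASCII characters as
# \uXXXX sequences in JSON output.
FEED_EXPORT_ENCODING = "utf-8"
```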

Bug fixes

  • DefaultRequestHeaders middleware now runs before UserAgent middleware (#2088). Warning: this is technically backwards incompatible, though we consider this a bug fix.
  • HTTP cache extension and plugins that use the .scrapy data directory now work outside projects (#1581). Warning: this is technically backwards incompatible, though we consider this a bug fix.
  • Selector does not allow passing both response and text anymore (#2153).
  • Fixed logging of wrong callback name with scrapy parse (#2169).
  • Fix for an odd gzip decompression bug (#1606).
  • Fix for selected callbacks when using CrawlSpider with scrapy parse (#2225).
  • Fix for invalid JSON and XML files when spider yields no items (#872).
  • Implement flush() for StreamLogger avoiding a warning in logs (#2125).

Refactoring

  • canonicalize_url has been moved to w3lib.url (#2168).

Tests & Requirements

Scrapy's new requirements baseline is Debian 8 "Jessie"; it was previously Ubuntu 12.04 Precise. In practice, this means that we run continuous integration tests with at least these (main) package versions: Twisted 14.0, pyOpenSSL 0.14, lxml 3.4.

Scrapy may very well work with older versions of these packages (the code base still has switches for older Twisted versions, for example), but this is not guaranteed, because it is no longer tested.

Documentation

  • Grammar fixes: #2128, #1566.
  • Download stats badge removed from README (#2160).
  • New scrapy architecture diagram (#2165).
  • Updated Response parameters documentation (#2197).
  • Reworded misleading RANDOMIZE_DOWNLOAD_DELAY description (#2190).
  • Add StackOverflow as a support channel (#2257).

- Python
Published by redapple over 9 years ago