Recent Releases of scrapy
scrapy -
- Changed the values of `DOWNLOAD_DELAY` (from 0 to 1) and `CONCURRENT_REQUESTS_PER_DOMAIN` (from 8 to 1) in the default project template (see the sketch after this list).
- Fixed several bugs in the engine initialization and exception handling logic.
- Allowed running tests with Twisted 25.5.0+ again and fixed test failures with lxml 6.0.0.
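For reference, a minimal sketch of what the changed defaults look like in a newly generated project's settings.py (values taken from the note above):

```python
# settings.py generated by the default project template -- a sketch of the
# changed defaults described above
DOWNLOAD_DELAY = 1                  # previously 0
CONCURRENT_REQUESTS_PER_DOMAIN = 1  # previously 8
```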
Published by wRAR 8 months ago
scrapy -
- Fixed a bug introduced in Scrapy 2.13.0 that caused results of request errbacks to be ignored when the errback was called because of a downloader error.
- Documentation and error message improvements related to the Scrapy 2.13.0 default reactor change.
Published by wRAR 9 months ago
scrapy - 2.13.0
- The asyncio reactor is now enabled by default
- Replaced `start_requests()` (sync) with `start()` (async) and changed how it is iterated (see the sketch after this list)
- Added the `allow_offsite` request meta key
- Spider middlewares that don't support asynchronous spider output are deprecated
- Added a base class for universal spider middlewares
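A minimal sketch of a spider using the new asynchronous start() together with the allow_offsite meta key (spider name and URL are placeholders):

```python
import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"  # placeholder name

    async def start(self):
        # start() is the asynchronous replacement for start_requests()
        yield scrapy.Request(
            "https://example.com",         # placeholder URL
            meta={"allow_offsite": True},  # new request meta key
        )
```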
Published by wRAR 10 months ago
scrapy -
- Security bug fixes.
- Support for Twisted >= 23.8.0.
- Documentation improvements.
Published by Gallaecio about 2 years ago
scrapy - 2.9.0
- Per-domain download settings.
- Compatibility with newer cryptography and parsel releases.
- JMESPath selector support from the new parsel (see the sketch after this list).
- Bug fixes.
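A sketch of the two features named above; the `DOWNLOAD_SLOTS` keys and the JSON document are illustrative assumptions, and JMESPath queries additionally require the jmespath package:

```python
from parsel import Selector

# In settings.py -- per-domain download settings, assuming the
# DOWNLOAD_SLOTS setting and the keys shown here:
DOWNLOAD_SLOTS = {
    "example.com": {"concurrency": 1, "delay": 2.0, "randomize_delay": False},
}

# JMESPath selectors from the new parsel, e.g. over a JSON body:
selector = Selector(text='{"user": {"name": "Ada"}}')
print(selector.jmespath("user.name").get())  # -> "Ada"
```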
Published by wRAR almost 3 years ago
scrapy - 2.8.0
This is a maintenance release, with minor features, bug fixes, and cleanups.
Published by Gallaecio about 3 years ago
scrapy - 2.7.0
- Added Python 3.11 support, dropped Python 3.6 support
- Improved support for asynchronous callbacks
- Asyncio support is enabled by default on new projects
- Output names of item fields can now be arbitrary strings
- Centralized request fingerprinting configuration is now possible
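Assuming the centralized configuration refers to the `REQUEST_FINGERPRINTER_IMPLEMENTATION` setting introduced around this release, opting in looks roughly like this:

```python
# settings.py -- opt in to the 2.7 request fingerprinting implementation
REQUEST_FINGERPRINTER_IMPLEMENTATION = "2.7"
```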
Published by wRAR over 3 years ago
scrapy - 1.8.2
Security bug fixes
When a `Request` object with cookies defined gets a redirect response causing a new `Request` object to be scheduled, the cookies defined in the original `Request` object are no longer copied into the new `Request` object.
If you manually set the `Cookie` header on a `Request` object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original `Request` object, your `Cookie` header is now dropped from the new `Request` object.
The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.
Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies (see the sketch after these notes). See the documentation of the `Request` class for more information.
When the domain of a cookie, either received in the `Set-Cookie` header of a response or defined in a `Request` object, is set to a public suffix (https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.
The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.
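A sketch of the opt-in described in the note above, naming the shared domain suffix as the cookie domain (cookie name and value are placeholders):

```python
import scrapy

# The cookie is shared across example.com and its subdomains because the
# shared suffix is set explicitly as the cookie domain.
request = scrapy.Request(
    "https://www.example.com",
    cookies=[{"name": "session", "value": "placeholder", "domain": "example.com"}],
)
```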
Published by Gallaecio almost 4 years ago
scrapy - 2.6.0
- Security fixes for cookie handling (see details below)
- Python 3.10 support
- asyncio support is no longer considered experimental, and works out-of-the-box on Windows regardless of your Python version
- Feed exports now support `pathlib.Path` output paths and per-feed item filtering and post-processing
Security bug fixes
When a `Request` object with cookies defined gets a redirect response causing a new `Request` object to be scheduled, the cookies defined in the original `Request` object are no longer copied into the new `Request` object.
If you manually set the `Cookie` header on a `Request` object and the domain name of the redirect URL is not an exact match for the domain of the URL of the original `Request` object, your `Cookie` header is now dropped from the new `Request` object.
The old behavior could be exploited by an attacker to gain access to your cookies. Please see the cjvr-mfj7-j4j8 security advisory for more information.
Note: It is still possible to enable the sharing of cookies between different domains with a shared domain suffix (e.g. example.com and any subdomain) by defining the shared domain suffix (e.g. example.com) as the cookie domain when defining your cookies. See the documentation of the `Request` class for more information.
When the domain of a cookie, either received in the `Set-Cookie` header of a response or defined in a `Request` object, is set to a public suffix (https://publicsuffix.org/), the cookie is now ignored unless the cookie domain is the same as the request domain.
The old behavior could be exploited by an attacker to inject cookies from a controlled domain into your cookiejar that could be sent to other domains not controlled by the attacker. Please see the mfjm-vh54-3f96 security advisory for more information.
Published by Gallaecio almost 4 years ago
scrapy -
Security bug fix:
If you use `HttpAuthMiddleware` (i.e. the `http_user` and `http_pass` spider attributes) for HTTP authentication, any request exposes your credentials to the request target.
To prevent unintended exposure of authentication credentials to unintended domains, you must now set an additional spider attribute, `http_auth_domain`, and point it to the specific domain to which the authentication credentials must be sent.
If the `http_auth_domain` spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.
If you need to send the same HTTP authentication credentials to multiple domains, you can use `w3lib.http.basic_auth_header` instead to set the value of the `Authorization` header of your requests.
If you really want your spider to send the same HTTP authentication credentials to any domain, set the `http_auth_domain` spider attribute to `None`.
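A sketch combining the two approaches described above (credentials, spider name and domains are placeholders):

```python
import scrapy
from w3lib.http import basic_auth_header


class AuthSpider(scrapy.Spider):
    name = "auth_example"  # placeholder name
    # Credentials are only sent with requests targeting http_auth_domain.
    http_user = "user"
    http_pass = "secret"
    http_auth_domain = "api.example.com"

    def cross_domain_request(self):
        # For any other domain, set the Authorization header explicitly.
        return scrapy.Request(
            "https://other.example.org",
            headers={"Authorization": basic_auth_header("user", "secret")},
        )
```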
Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue to work.
Published by Gallaecio over 4 years ago
scrapy -
Security bug fix:
If you use `HttpAuthMiddleware` (i.e. the `http_user` and `http_pass` spider attributes) for HTTP authentication, any request exposes your credentials to the request target.
To prevent unintended exposure of authentication credentials to unintended domains, you must now set an additional spider attribute, `http_auth_domain`, and point it to the specific domain to which the authentication credentials must be sent.
If the `http_auth_domain` spider attribute is not set, the domain of the first request will be considered the HTTP authentication target, and authentication credentials will only be sent in requests targeting that domain.
If you need to send the same HTTP authentication credentials to multiple domains, you can use `w3lib.http.basic_auth_header` instead to set the value of the `Authorization` header of your requests.
If you really want your spider to send the same HTTP authentication credentials to any domain, set the `http_auth_domain` spider attribute to `None`.
Finally, if you are a user of scrapy-splash, know that this version of Scrapy breaks compatibility with scrapy-splash 0.7.2 and earlier. You will need to upgrade scrapy-splash to a newer version for it to continue to work.
Published by Gallaecio over 4 years ago
scrapy - 2.4.1
- Fixed feed exports overwrite support
- Fixed the asyncio event loop handling, which could make code hang
- Fixed the IPv6-capable DNS resolver `CachingHostnameResolver` for download handlers that call `reactor.resolve`
- Fixed the output of the `genspider` command showing placeholders instead of the import path of the generated spider module (issue 4874)
Published by Gallaecio over 5 years ago
scrapy - 2.4.0
Highlights:
- Python 3.5 support has been dropped.
- The `file_path` method of media pipelines can now access the source item. This allows you to set a download file path based on item data.
- The new `item_export_kwargs` key of the `FEEDS` setting allows defining keyword parameters to pass to item exporter classes.
- You can now choose whether feed exports overwrite or append to the output file. For example, when using the `crawl` or `runspider` commands, you can use the `-O` option instead of `-o` to overwrite the output file (see the sketch after this list).
- Zstd-compressed responses are now supported if zstandard is installed.
- In settings, where the import path of a class is required, it is now possible to pass a class object instead.
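A sketch of a `FEEDS` entry combining the overwrite flag and `item_export_kwargs` (the file name, and the assumption that the JSON item exporter accepts an indent keyword, are illustrative):

```python
# settings.py -- one feed that overwrites its output file and passes a
# keyword argument through to the item exporter class
FEEDS = {
    "items.json": {
        "format": "json",
        "overwrite": True,  # same effect as the -O command-line option
        "item_export_kwargs": {
            "indent": 4,  # assumed keyword of the JSON item exporter
        },
    },
}
```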
Published by Gallaecio over 5 years ago
scrapy - 2.3.0
Highlights:
- Feed exports now support Google Cloud Storage as a storage backend
- The new `FEED_EXPORT_BATCH_ITEM_COUNT` setting allows delivering output items in batches of up to the specified number of items (see the sketch after this list). It also serves as a workaround for delayed file delivery, which causes Scrapy to only start item delivery after the crawl has finished when using certain storage backends (S3, FTP, and now GCS).
- The base implementation of item loaders has been moved into a separate library, itemloaders, allowing usage from outside Scrapy and a separate release schedule
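A sketch of batched delivery; it assumes the feed URI must contain a batch placeholder such as %(batch_id)d so each batch gets its own file:

```python
# settings.py -- deliver items in batches of up to 100 per output file
FEED_EXPORT_BATCH_ITEM_COUNT = 100
FEEDS = {
    "items-%(batch_id)d.json": {"format": "json"},
}
```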
Published by Gallaecio over 5 years ago
scrapy - 2.2.1
The startproject command no longer makes unintended changes to the permissions of files in the destination folder, such as removing execution permissions.
Published by Gallaecio over 5 years ago
scrapy - 2.2.0
Highlights:
- Python 3.5.2+ is required now
- dataclass objects and attrs objects are now valid item types
- New `TextResponse.json` method (see the sketch after this list)
- New `bytes_received` signal that allows canceling response download
- `CookiesMiddleware` fixes
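A minimal sketch of the new method inside a callback (spider name and URL are placeholders):

```python
import scrapy


class ApiSpider(scrapy.Spider):
    name = "api_example"                            # placeholder name
    start_urls = ["https://api.example.com/items"]  # placeholder URL

    def parse(self, response):
        # TextResponse.json() deserializes the JSON response body
        data = response.json()
        yield {"item_count": len(data)}
```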
Published by Gallaecio over 5 years ago
scrapy - 2.1.0
Highlights:
- New `FEEDS` setting to export to multiple feeds (see the sketch after this list)
- New `Response.ip_address` attribute
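A sketch of exporting to two feeds from the same crawl (file names are illustrative):

```python
# settings.py -- two output feeds written during the same crawl
FEEDS = {
    "items.json": {"format": "json"},
    "items.csv": {"format": "csv"},
}
```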
Published by Gallaecio almost 6 years ago
scrapy - 2.0.1
- `Response.follow_all` now supports an empty URL iterable as input (#4408, #4420; see the sketch after this list)
- Removed top-level reactor imports to prevent errors about the wrong Twisted reactor being installed when setting a different Twisted reactor using `TWISTED_REACTOR` (#4401, #4406)
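A sketch of `follow_all` in a callback; with this fix, an empty selection or an empty URL iterable simply yields no requests (spider name, URL and selector are placeholders):

```python
import scrapy


class PaginationSpider(scrapy.Spider):
    name = "pagination_example"           # placeholder name
    start_urls = ["https://example.com"]  # placeholder URL

    def parse(self, response):
        # follow_all() returns an iterable of Request objects; an empty
        # match now simply produces no requests.
        yield from response.follow_all(css="a.next-page", callback=self.parse)
```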
Published by Gallaecio almost 6 years ago
scrapy - 2.0.0
Highlights:
- Python 2 support has been removed
- Partial coroutine syntax support and experimental asyncio support
- New `Response.follow_all` method
- FTP support for media pipelines
- New `Response.certificate` attribute
- IPv6 support through `DNS_RESOLVER`
Published by Gallaecio almost 6 years ago
scrapy - 1.6.0
Highlights:
- Better Windows support
- Python 3.7 compatibility
- Big documentation improvements, including a switch from .extract_first() + .extract() API to .get() + .getall() API
- Feed exports, FilePipeline and MediaPipeline improvements
- Better extensibility: `item_error` and `request_reached_downloader` signals; `from_crawler` support for feed exporters, feed storages and dupefilters.
- scrapy.contracts fixes and new features
- Telnet console security improvements, first released as a backport in Scrapy 1.5.2 (2019-01-22)
- Clean-up of the deprecated code
- Various bug fixes, small new features and usability improvements across the codebase.
Full changelog is in the docs.
Published by dangra about 7 years ago
scrapy - 1.5.0
This release brings small new features and improvements across the codebase. Some highlights:
- Google Cloud Storage is supported in FilesPipeline and ImagesPipeline.
- Crawling with proxy servers becomes more efficient, as connections to proxies can be reused now.
- Warnings, exception and logging messages are improved to make debugging easier.
- scrapy parse command now allows setting custom request meta via the --meta argument.
- Compatibility with Python 3.6, PyPy and PyPy3 is improved; PyPy and PyPy3 are now supported officially, by running tests on CI.
- Better default handling of HTTP 308, 522 and 524 status codes.
- Documentation is improved, as usual.
Full changelog is in the docs.
Published by redapple about 8 years ago
scrapy - 1.3.3
Release notes at https://doc.scrapy.org/en/latest/news.html#scrapy-1-3-3-2017-03-10
Published by dangra about 8 years ago
scrapy - 1.4.0
Release notes at https://doc.scrapy.org/en/latest/news.html#scrapy-1-4-0-2017-05-18
Published by dangra about 8 years ago
scrapy - 1.2.2
Bug fixes
- Fix a cryptic traceback when a pipeline fails on `open_spider()` (#2011)
- Fix embedded IPython shell variables (fixing #396 that re-appeared in 1.2.0, fixed in #2418)
- A couple of patches when dealing with robots.txt:
- handle (non-standard) relative sitemap URLs (#2390)
- handle non-ASCII URLs and User-Agents in Python 2 (#2373)
Documentation
- Document "download_latency" key in `Request`'s meta dict (#2033)
- Remove page on (deprecated & unsupported) Ubuntu packages from ToC (#2335)
- A few fixed typos (#2346, #2369, #2380) and clarifications (#2354, #2325, #2414)
Other changes
- Advertise conda-forge as Scrapy’s official conda channel (#2387)
- More helpful error messages when trying to use .css() or .xpath() on non-Text Responses (#2264)
- startproject command now generates a sample middlewares.py file (#2335)
- Add more dependencies’ version info in scrapy version verbose output (#2404)
- Remove all *.pyc files from source distribution (#2386)
Published by redapple about 9 years ago
scrapy - 1.2.1
Bug fixes
- Include OpenSSL’s more permissive default ciphers when establishing TLS/SSL connections (#2314).
- Fix “Location” HTTP header decoding on non-ASCII URL redirects (#2321).
Documentation
- Fix JsonWriterPipeline example (#2302).
- Various notes: #2330 on spider names, #2329 on middleware methods processing order, #2327 on getting multi-valued HTTP headers as lists.
Other changes
- Removed www. from start_urls in built-in spider templates (#2299).
Published by redapple about 9 years ago
scrapy - 1.2.0
New Features
- New `FEED_EXPORT_ENCODING` setting to customize the encoding used when writing items to a file. This can be used to turn off `\uXXXX` escapes in JSON output (see the sketch after this list). It is also useful for those wanting something other than UTF-8 for XML or CSV output (#2034).
- `startproject` command now supports an optional destination directory to override the default one based on the project name (#2005).
- New `SCHEDULER_DEBUG` setting to log requests serialization failures (#1610).
- JSON encoder now supports serialization of `set` instances (#2058).
- Interpret `application/json-amazonui-streaming` as `TextResponse` (#1503).
- `scrapy` is imported by default when using shell tools (`shell`, `inspect_response`) (#2248).
Bug fixes
- `DefaultRequestHeaders` middleware now runs before `UserAgent` middleware (#2088). Warning: this is technically backwards incompatible, though we consider this a bug fix.
- HTTP cache extension and plugins that use the `.scrapy` data directory now work outside projects (#1581). Warning: this is technically backwards incompatible, though we consider this a bug fix.
- `Selector` no longer allows passing both `response` and `text` (#2153).
- Fixed logging of the wrong callback name with `scrapy parse` (#2169).
- Fix for an odd gzip decompression bug (#1606).
- Fix for selected callbacks when using `CrawlSpider` with `scrapy parse` (#2225).
- Fix for invalid JSON and XML files when a spider yields no items (#872).
- Implement `flush()` for `StreamLogger`, avoiding a warning in logs (#2125).
Refactoring
- `canonicalize_url` has been moved to `w3lib.url` (#2168).
Tests & Requirements
Scrapy's new requirements baseline is Debian 8 "Jessie"; it was previously Ubuntu 12.04 Precise. In practice this means we run continuous integration tests with at least these (main) package versions: Twisted 14.0, pyOpenSSL 0.14, lxml 3.4.
Scrapy may very well work with older versions of these packages (the code base still has switches for older Twisted versions, for example), but this is not guaranteed, because it is no longer tested.
Documentation
- Grammar fixes: #2128, #1566.
- Download stats badge removed from README (#2160).
- New scrapy architecture diagram (#2165).
- Updated `Response` parameters documentation (#2197).
- Reworded misleading `RANDOMIZE_DOWNLOAD_DELAY` description (#2190).
- Add StackOverflow as a support channel (#2257).
Published by redapple over 9 years ago