You can start multiple spider instances that share a single Redis queue, which makes scrapy-redis well suited for broad, multi-domain crawls. Scraped items are pushed into a Redis queue, so you can start as many post-processing processes as you need, all sharing the same items queue. The library provides a Scheduler with a Duplication Filter, an Item Pipeline, and Base Spiders.

The default request serializer is pickle, but it can be swapped for any module that provides loads and dumps functions. Note that pickle is not compatible across Python versions. Version 0.3 changed the request serialization from marshal to cPickle, so requests persisted with version 0.2 will not work with 0.3.

The class scrapy_redis.spiders.RedisSpider enables a spider to read URLs from Redis. The URLs in the Redis queue are processed one after another; if the first request yields more requests, the spider processes those before fetching another URL from Redis.
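
The components above are wired in through Scrapy settings. A minimal configuration sketch (the spider name, redis_key, and parsed fields are illustrative; the scheduler, dupefilter, and pipeline paths are the library's documented class paths):

```python
# settings.py -- enable the scrapy-redis scheduler, dupefilter, and item pipeline
SCHEDULER = "scrapy_redis.scheduler.Scheduler"
DUPEFILTER_CLASS = "scrapy_redis.dupefilter.RFPDupeFilter"
ITEM_PIPELINES = {"scrapy_redis.pipelines.RedisPipeline": 300}
REDIS_URL = "redis://localhost:6379"

# myspider.py -- a spider that reads its start URLs from a Redis list
from scrapy_redis.spiders import RedisSpider

class MySpider(RedisSpider):
    name = "myspider"
    redis_key = "myspider:start_urls"  # list the spider pops URLs from

    def parse(self, response):
        # Items yielded here are pushed to Redis by RedisPipeline,
        # where any number of post-processing workers can consume them.
        yield {"url": response.url, "title": response.css("title::text").get()}
```

URLs are then fed to the running spider(s) by pushing onto the key, e.g. `redis-cli lpush myspider:start_urls https://example.com`.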

Features

  • Distributed crawling/scraping
  • Distributed post-processing
  • Scrapy plug-and-play components
  • Python 2.7, 3.4 or 3.5 required
  • Redis >= 2.8 required
  • Scheduler + Duplication Filter, Item Pipeline, Base Spiders
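
As noted above, the default request serializer is pickle, and any module exposing loads and dumps functions can replace it. A small sketch of that contract using only stdlib modules (the dict stands in for a real persisted Request, which carries more state):

```python
import json
import pickle

# Stand-in for a serialized request; a real scrapy Request is richer.
request = {"url": "https://example.com", "method": "GET", "priority": 0}

# Default serializer: pickle. Complete, but not portable across Python versions.
assert pickle.loads(pickle.dumps(request)) == request

# Any module with the same loads/dumps pair can be swapped in, e.g. json
# (portable across versions, but limited to JSON-serializable fields).
assert json.loads(json.dumps(request)) == request
```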

License

MIT License

Additional Project Details

Programming Language

Python

Related Categories

Python Browsers, Python Web Scrapers

Registered

2021-11-09