Scrapy: Delaying Requests

This article covers how to add a delay between requests in Scrapy.

Problem Description

Every time I run my code, my IP gets banned. I need help delaying each request by 10 seconds. I've tried to place DOWNLOAD_DELAY in the code, but it gives no results. Any help is appreciated.

import re

import scrapy


# item class included here
class DmozItem(scrapy.Item):
    # define the fields for your item here, like:
    link = scrapy.Field()
    attr = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = [
        "http://washingtondc.craigslist.org/search/fua"
    ]

    BASE_URL = 'http://washingtondc.craigslist.org/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/nos/vgm/" + item_id
            item = DmozItem()
            item["link"] = response.url
            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item

Recommended Answer

You need to set DOWNLOAD_DELAY in your project's settings.py. Note that you may also need to limit concurrency: by default, per-domain concurrency is 8, so you are hitting the website with 8 simultaneous requests.

# settings.py
DOWNLOAD_DELAY = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 2
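For the 10-second delay asked for in the question, a minimal settings.py sketch could look like the following. The specific values, and the optional AutoThrottle section, are assumptions tailored to that requirement rather than part of the original answer; note that RANDOMIZE_DOWNLOAD_DELAY is enabled by default, so the actual wait is spread between 0.5x and 1.5x of DOWNLOAD_DELAY.

# settings.py -- sketch for the 10-second delay requested in the question
DOWNLOAD_DELAY = 10                  # wait roughly 10 s between requests to the same slot
CONCURRENT_REQUESTS_PER_DOMAIN = 1   # one request at a time per domain
# RANDOMIZE_DOWNLOAD_DELAY defaults to True, spreading the wait between
# 0.5 * DOWNLOAD_DELAY and 1.5 * DOWNLOAD_DELAY.

# Optional: let AutoThrottle adapt the delay to the server's response times.
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 10
AUTOTHROTTLE_MAX_DELAY = 60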

Starting with Scrapy 1.0, you can also place custom settings in the spider itself, so you could do something like this:

from scrapy import Spider


class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/",
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/",
    ]

    custom_settings = {
        "DOWNLOAD_DELAY": 5,
        "CONCURRENT_REQUESTS_PER_DOMAIN": 2
    }
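Applied to the spider from the question, the same per-spider override would look roughly like the sketch below. The values are chosen to match the 10-second delay asked for and are not from the original answer; custom_settings takes precedence over the project-wide settings.py.

import scrapy


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist.org"]
    start_urls = ["http://washingtondc.craigslist.org/search/fua"]

    # Per-spider overrides; these win over the values in settings.py
    custom_settings = {
        "DOWNLOAD_DELAY": 10,
        "CONCURRENT_REQUESTS_PER_DOMAIN": 1,
    }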

Delay and concurrency are set per downloader slot, not per request. To check what delay and concurrency you actually have, you could try something like this:

def parse(self, response):
    """ """
    delay = self.crawler.engine.downloader.slots["www.dmoz.org"].delay
    concurrency = self.crawler.engine.downloader.slots["www.dmoz.org"].concurrency
    self.log("Delay {}, concurrency {} for request {}".format(delay, concurrency, response.request))
    return
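As a variation (not part of the original answer), Scrapy's downloader records the slot it used for a request in request.meta['download_slot'], so the hostname does not need to be hard-coded. A sketch under that assumption:

def parse(self, response):
    # The downloader stores the slot key it used in request.meta['download_slot'],
    # so the slot can be looked up without hard-coding the hostname.
    slot_key = response.request.meta.get('download_slot')
    slot = self.crawler.engine.downloader.slots[slot_key]
    self.log("Delay {}, concurrency {} for request {}".format(
        slot.delay, slot.concurrency, response.request))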
