Every time I run my code my IP gets banned. I need to delay each request by 10 seconds. I've tried to set DOWNLOAD_DELAY in code but it has no effect. Any help is appreciated.
```python
import re

import scrapy


# item class included here
class DmozItem(scrapy.Item):
    # define the fields for your item here like:
    link = scrapy.Field()
    attr = scrapy.Field()


class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["craigslist"]
    start_urls = ["washingtondc.craigslist/search/fua"]

    BASE_URL = 'washingtondc.craigslist/'

    def parse(self, response):
        links = response.xpath('//a[@class="hdrlnk"]/@href').extract()
        for link in links:
            absolute_url = self.BASE_URL + link
            yield scrapy.Request(absolute_url, callback=self.parse_attr)

    def parse_attr(self, response):
        match = re.search(r"(\w+)\.html", response.url)
        if match:
            item_id = match.group(1)
            url = self.BASE_URL + "reply/nos/vgm/" + item_id
            item = DmozItem()
            item["link"] = response.url
            return scrapy.Request(url, meta={'item': item}, callback=self.parse_contact)

    def parse_contact(self, response):
        item = response.meta['item']
        item["attr"] = "".join(response.xpath("//div[@class='anonemail']//text()").extract())
        return item
```
You need to set DOWNLOAD_DELAY in your project's settings.py. Note that you may also need to limit concurrency: by default Scrapy uses 8 concurrent requests, so you are hitting the website with 8 simultaneous requests.
```python
# settings.py
DOWNLOAD_DELAY = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 2
```
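For the 10-second delay the question asks about, a minimal settings.py sketch might look like the following. The RANDOMIZE_DOWNLOAD_DELAY and AUTOTHROTTLE_ENABLED lines are optional additions beyond the original answer; the exact values are assumptions to tune for your target site.

```python
# settings.py -- sketch for the asker's 10-second delay (values are assumptions)
DOWNLOAD_DELAY = 10                 # wait ~10 seconds between requests to the same slot
CONCURRENT_REQUESTS_PER_DOMAIN = 1  # one request at a time per domain
RANDOMIZE_DOWNLOAD_DELAY = True     # vary the delay between 0.5x and 1.5x (Scrapy's default)
AUTOTHROTTLE_ENABLED = True         # optionally let Scrapy adapt the delay to server load
```

Lowering concurrency to 1 matters here: with the default of 8, a per-request delay alone still produces bursts of simultaneous requests.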
Starting with Scrapy 1.0 you can also place custom settings in the spider itself, so you could do something like this:
```python
from scrapy import Spider


class DmozSpider(Spider):
    name = "dmoz"
    allowed_domains = ["dmoz"]
    start_urls = [
        "www.dmoz/Computers/Programming/Languages/Python/Books/",
        "www.dmoz/Computers/Programming/Languages/Python/Resources/",
    ]

    custom_settings = {
        "DOWNLOAD_DELAY": 5,
        "CONCURRENT_REQUESTS_PER_DOMAIN": 2,
    }
```
Delay and concurrency are set per downloader slot, not per request. To check what delay and concurrency your slot is actually using, you could try something like this:
```python
def parse(self, response):
    """Log the delay and concurrency of the downloader slot serving this request."""
    delay = self.crawler.engine.downloader.slots["www.dmoz"].delay
    concurrency = self.crawler.engine.downloader.slots["www.dmoz"].concurrency
    self.log("Delay {}, concurrency {} for request {}".format(delay, concurrency, response.request))
    return
```
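One reason the slot's delay may not match DOWNLOAD_DELAY exactly: with RANDOMIZE_DOWNLOAD_DELAY left at its default (True), Scrapy waits a random time between 0.5x and 1.5x the configured delay rather than a fixed interval. A simplified sketch of that behavior (not Scrapy's actual code):

```python
import random

def randomized_delay(download_delay, randomize=True):
    """Approximate how Scrapy spaces out requests for one downloader slot.

    With randomization on, the wait is drawn uniformly from
    [0.5 * delay, 1.5 * delay]; otherwise it is the fixed delay.
    """
    if randomize:
        return random.uniform(0.5 * download_delay, 1.5 * download_delay)
    return download_delay

# With DOWNLOAD_DELAY = 10 the actual pause falls between 5 and 15 seconds.
wait = randomized_delay(10)
```

This jitter makes the crawl look less mechanical to the target site, which is usually what you want when trying to avoid a ban.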