I have a very basic scrapy spider, which grabs urls from a file and then downloads them. The only problem is that some of them get redirected to a slightly modified url within the same domain. I want to capture those urls in my callback function using response.meta, and it works on normal urls, but when a url is redirected the callback doesn't seem to get called. How can I fix it? Here's my code.
```python
from scrapy.contrib.spiders import CrawlSpider
from scrapy import log
from scrapy import Request

class DmozSpider(CrawlSpider):
    name = "dmoz"
    handle_httpstatus_list = [302]
    allowed_domains = ["www.exmaple.net"]
    f = open("C:\\python27\\1a.csv", 'r')
    url = 'http://www.exmaple.net/Query?indx='
    # strip the trailing newline from each CSV row before appending it
    start_urls = [url + row.strip() for row in f.readlines()]

    def parse(self, response):
        print response.meta.get('redirect_urls', [response.url])
        print response.status
        print response.headers.get('Location')
```

I've also tried something like this:
```python
def parse(self, response):
    return Request(response.url,
                   meta={'dont_redirect': True, 'handle_httpstatus_list': [302]},
                   callback=self.parse_my_url)

def parse_my_url(self, response):
    print response.status
    print response.headers.get('Location')
```

And that doesn't work either.
Accepted answer
By default, scrapy follows redirects. If you don't want a request to be redirected, use the start_requests method and add the flags to the request meta:

```python
def start_requests(self):
    requests = [Request(self.url + u,
                        meta={'handle_httpstatus_list': [302],
                              'dont_redirect': True},
                        callback=self.parse)
                for u in self.start_urls]
    return requests
```
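With redirects disabled this way, the 302 response itself reaches the callback, so the new url can be read from the Location header and followed manually if needed. The Location header may be a relative path, so it should be resolved against the request url. A minimal sketch of that resolution (the urls below are placeholders based on the question's example domain, not real endpoints):

```python
from urllib.parse import urljoin

def redirect_target(current_url, location_header):
    """Resolve a 302 Location header against the url that was requested.

    Location may be absolute or relative; urljoin handles both cases.
    """
    return urljoin(current_url, location_header)

# Inside the spider callback this would be used roughly as:
# def parse(self, response):
#     if response.status == 302:
#         target = redirect_target(response.url,
#                                  response.headers['Location'])
#         yield Request(target, callback=self.parse_my_url)
```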