This post describes how to deal with a multithreaded Python web crawler that gets stuck and never exits; it should be a useful reference for anyone facing the same problem.
Problem description
I am writing a Python web crawler and I want to make it multithreaded. I have finished the basic part; here is what it does:
A thread fetches a URL from the queue;
the thread extracts the links from the fetched page, checks whether each link already exists in a pool (a set), and puts new links into both the queue and the pool;
the thread writes the URL and the HTTP response status to a CSV file.
But when I run the crawler, it always ends up stuck and never exits properly. I have gone through the official Python documentation but am still at a loss.
Here is the code:
#!/usr/bin/env python
#!coding=utf-8
import requests, re, urlparse
import threading
from Queue import Queue
from bs4 import BeautifulSoup

#custom modules and files
from setting import config


class Page:

    def __init__(self, url):
        self.url = url
        self.status = ""
        self.rawdata = ""
        self.error = False

        r = ""
        try:
            r = requests.get(self.url, headers={'User-Agent': 'random spider'})
        except requests.exceptions.RequestException as e:
            self.status = e
            self.error = True
        else:
            if not r.history:
                self.status = r.status_code
            else:
                self.status = r.history[0]
        self.rawdata = r

    def outlinks(self):
        self.outlinks = []

        #links, contains URL, anchor text, nofollow
        raw = self.rawdata.text.lower()
        soup = BeautifulSoup(raw)
        outlinks = soup.find_all('a', href=True)
        for link in outlinks:
            d = {"follow": "yes"}
            d['url'] = urlparse.urljoin(self.url, link.get('href'))
            d['anchortext'] = link.text
            if link.get('rel'):
                if "nofollow" in link.get('rel'):
                    d["follow"] = "no"
            if d not in self.outlinks:
                self.outlinks.append(d)


pool = Queue()
exist = set()
thread_num = 10
lock = threading.Lock()
output = open("final.csv", "a")

#the domain is the start point
domain = config["domain"]
pool.put(domain)
exist.add(domain)


def crawl():
    while True:
        p = Page(pool.get())

        #write data to output file
        lock.acquire()
        output.write(p.url+" "+str(p.status)+" ")
        print "%s crawls %s" % (threading.currentThread().getName(), p.url)
        lock.release()

        if not p.error:
            p.outlinks()
            outlinks = p.outlinks
            if urlparse.urlparse(p.url)[1] == urlparse.urlparse(domain)[1]:
                for link in outlinks:
                    if link['url'] not in exist:
                        lock.acquire()
                        pool.put(link['url'])
                        exist.add(link['url'])
                        lock.release()

        pool.task_done()


for i in range(thread_num):
    t = threading.Thread(target=crawl)
    t.start()

pool.join()
output.close()

Any help would be greatly appreciated!
Thanks,
Markus
Recommended answer
The crawl function has an infinite while loop with no possible exit path: the condition True always evaluates to True, so the loop keeps running and, as you say, the program never exits properly. (Note also that once the queue is drained, every worker thread blocks inside pool.get(); because the threads are not daemon threads, they keep the interpreter alive even after pool.join() returns.)
Modify the while loop in the crawl function to include a condition. For example, exit the loop once the number of links saved to the CSV file exceeds a certain minimum, i.e.
def crawl():
    while len(exist) <= min_links:
        ...
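To make that concrete, here is a minimal, self-contained sketch of the suggested exit condition, written in Python 2 to match the question's code. The min_links value, the get() timeout, and the fake outlink generator are assumptions for illustration; in the real crawler the marked line would use the Page class from the question.

import threading
from Queue import Queue, Empty

pool = Queue()
exist = set()
lock = threading.Lock()
thread_num = 10
min_links = 100   #assumed threshold: stop once this many URLs have been seen

def crawl():
    while len(exist) <= min_links:
        try:
            #a timeout keeps the thread from blocking forever in get()
            #once the queue runs dry
            url = pool.get(timeout=2)
        except Empty:
            break
        #--- the real work would go here: p = Page(url); p.outlinks(); ... ---
        new_links = ["%s/%d" % (url, i) for i in range(3)]  #fake outlinks
        for link in new_links:
            with lock:
                if link not in exist:
                    exist.add(link)
                    pool.put(link)
        pool.task_done()

pool.put("http://example.com")
exist.add("http://example.com")

threads = [threading.Thread(target=crawl) for _ in range(thread_num)]
for t in threads:
    t.start()
for t in threads:
    t.join()   #join the worker threads themselves instead of pool.join()
print "done, %d links seen" % len(exist)

Note that the main thread now joins the worker threads rather than calling pool.join(): once every worker has left its loop, the program terminates on its own, which is exactly what the original code could not do.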
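An alternative worth mentioning (not part of the answer above, but a standard pattern for this exact problem) is to keep while True and shut the workers down with sentinel values: after pool.join() confirms that all queued work has been processed, the main thread puts one None per worker, and each worker exits when it receives one. A minimal sketch, again in Python 2, with placeholder URLs:

import threading
from Queue import Queue

pool = Queue()
thread_num = 10

def crawl():
    while True:
        url = pool.get()
        if url is None:        #sentinel: time to shut down
            pool.task_done()
            break
        #--- the real crawling work would go here ---
        pool.task_done()

threads = [threading.Thread(target=crawl) for _ in range(thread_num)]
for t in threads:
    t.start()

for url in ["http://example.com/a", "http://example.com/b"]:  #placeholder URLs
    pool.put(url)

pool.join()                    #wait until every real URL is processed
for _ in range(thread_num):
    pool.put(None)             #one sentinel per worker
for t in threads:
    t.join()
print "all workers exited cleanly"

Because the workers in the question also put newly discovered URLs back into the queue, pool.join() only returns after the whole crawl frontier has been drained, so the sentinels cannot cut the crawl short the way a fixed min_links threshold might.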