Learn Web Scraping in Three Minutes


Training deep learning models depends heavily on data. When you do not have enough, a web crawler can collect it from the internet. Below, scraping images of 刘亦菲 (Liu Yifei) and 刘德华 (Andy Lau) is used as the example.

## How to use the code

1. Enter the keywords you want to scrape in `name.txt`, one per line. I entered 刘亦菲 and 刘德华. (You do not need to create the output folders; the code creates them.)

```text
刘亦菲
刘德华
```
The spider script (`spider.py`) is shown below:

```python
import os
import re
from urllib import error

import requests
from bs4 import BeautifulSoup

num = 0          # running count of downloaded images
numPicture = 0   # number of images to download per keyword
file = ''        # output folder for the current keyword
List = []        # lists of image URLs found per result page


def Find(url):
    """Count how many images the search returns (60 results per page)."""
    global List
    print('Counting the total number of images, please wait...')
    t = 0
    s = 0
    while t < 1000:
        Url = url + str(t)
        try:
            Result = requests.get(Url, timeout=7)
        except BaseException:
            t = t + 60
            continue
        else:
            result = Result.text
            # Use a regular expression to extract the image URLs first
            pic_url = re.findall('"objURL":"(.*?)",', result, re.S)
            s += len(pic_url)
            if len(pic_url) == 0:
                break
            else:
                List.append(pic_url)
                t = t + 60
    return s


def recommend(url):
    """Collect the related-search suggestions shown on the results page."""
    Re = []
    try:
        html = requests.get(url)
    except error.HTTPError:
        return Re
    else:
        html.encoding = 'utf-8'
        bsObj = BeautifulSoup(html.text, 'html.parser')
        div = bsObj.find('div', id='topRS')
        if div is not None:
            listA = div.findAll('a')
            for i in listA:
                if i is not None:
                    Re.append(i.get_text())
        return Re


def dowmloadPicture(html, keyword):
    """Download every image URL found on one result page."""
    global num
    # Use a regular expression to extract the image URLs first
    pic_url = re.findall('"objURL":"(.*?)",', html, re.S)
    print('Found images for keyword "' + keyword + '", starting download...')
    for each in pic_url:
        print('Downloading image ' + str(num + 1) + ', URL: ' + str(each))
        try:
            if each is not None:
                pic = requests.get(each, timeout=7)
            else:
                continue
        except BaseException:
            print('Error: the current image could not be downloaded')
            continue
        else:
            string = os.path.join(file, str(num) + '.jpg')
            with open(string, 'wb') as fp:
                fp.write(pic.content)
            num += 1
        if num >= numPicture:
            return


if __name__ == '__main__':
    tm = int(input('Enter the number of images to download per keyword: '))
    numPicture = tm
    with open('name.txt', encoding='utf-8') as f:
        line_list = [k.strip() for k in f.readlines()]

    for word in line_list:
        # NOTE: the URL prefix was truncated in the original article; this is the
        # Baidu image-search endpoint that matches the "objURL" pattern used above.
        url = 'https://image.baidu.com/search/flip?tn=baiduimage&ie=utf-8&word=' + word + '&pn='
        tot = Find(url)
        Recommend = recommend(url)  # record the related-search suggestions
        print('Keyword %s: %d images found in total' % (word, tot))
        file = word
        if os.path.exists(file):
            print('The folder already exists; downloads will be added to it')
        else:
            os.mkdir(file)
        t = 0
        tmp = url
        while t < numPicture:
            try:
                url = tmp + str(t)
                result = requests.get(url, timeout=10)
                print(url)
            except error.HTTPError:
                print('Network error, please check your connection and retry')
                t = t + 60
            else:
                dowmloadPicture(result.text, word)
                t = t + 60
        numPicture = numPicture + tm

    print('Done')
```
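The heart of the script is a single regular expression that pulls image URLs out of the search-result HTML. Here is a minimal, self-contained sketch of just that step; the response fragment and the two URLs are invented purely for illustration:

```python
import re

# A made-up fragment of a search-result page, for illustration only.
sample = '"objURL":"https://example.com/a.jpg","width":800,"objURL":"https://example.com/b.jpg",'

# Same pattern the spider uses: capture everything between "objURL":" and the closing ",
pic_urls = re.findall('"objURL":"(.*?)",', sample, re.S)
print(pic_urls)  # ['https://example.com/a.jpg', 'https://example.com/b.jpg']
```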

2. Run the code: `python3 spider.py`

    You will be prompted to enter the number of images to download per keyword (the `input()` call at the top of `__main__`), and the script then prints its progress for each image.

3. As a result, two new folders are created, named 刘亦菲 and 刘德华.

4. Each folder contains the downloaded images, saved as `0.jpg`, `1.jpg`, and so on. You can verify the result with the short snippet after this list.
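A quick way to check what was downloaded is to count the `.jpg` files in each keyword folder. This is a minimal sketch; it assumes the two example keywords from `name.txt` and the numeric file naming used by the script above:

```python
import os

# Count the downloaded .jpg files in each keyword folder created by the spider.
# The folder names below are the example keywords from name.txt.
for folder in ['刘亦菲', '刘德华']:
    if os.path.isdir(folder):
        jpgs = [name for name in os.listdir(folder) if name.endswith('.jpg')]
        print(folder, '-', len(jpgs), 'images')
    else:
        print(folder, '- folder not found')
```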
