Web Scraping Advanced (II)
I. Example: scraping the most popular recipes from Xiachufang
1. Method 1: first find every smallest common parent tag, <div class="info pure-u">, then extract the dish name, URL, and ingredients from each parent tag.
import requests
# Import the requests library
from bs4 import BeautifulSoup
# Import the BeautifulSoup library
res_foods = requests.get('http://www.xiachufang.com/explore/')
# Fetch the page (the original URL was lost in extraction; the Xiachufang "explore" page is assumed here)
bs_foods = BeautifulSoup(res_foods.text, 'html.parser')
# Parse the HTML
list_foods = bs_foods.find_all('div', class_='info pure-u')
# Find all the smallest common parent tags
list_all = []
# An empty list to hold the results
for food in list_foods:
    tag_a = food.find('a')
    # The <a> tag inside this parent tag
    name = tag_a.text[17:-13]
    # Dish name; the slice [17:-13] trims the surrounding padding
    URL = 'http://www.xiachufang.com' + tag_a['href']
    # Build the full URL (the site prefix was lost in extraction and is assumed here)
    tag_p = food.find('p', class_='ing ellipsis')
    # The <p> tag holding the ingredients
    ingredients = tag_p.text[1:-1]
    # Ingredients; the slice [1:-1] trims the leading and trailing newline
    list_all.append([name, URL, ingredients])
    # Append [name, URL, ingredients] to list_all
print(list_all)
# Print the results
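Slice bounds like [17:-13] depend on the exact amount of whitespace padding in the page source and break as soon as the layout changes. str.strip() removes all leading and trailing whitespace regardless of its length. A minimal sketch; the padded string below is made up to stand in for the tag text:

```python
# Hypothetical tag text, padded with newlines and spaces as on the live page
raw_name = "\n\n        红烧肉\n        \n"

# Instead of a hard-coded slice such as raw_name[17:-13], strip all
# leading/trailing whitespace; this survives changes in the padding
name = raw_name.strip()
print(name)  # 红烧肉
```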
2. Method 2: extract all the dish names, all the URLs, and all the ingredient lists separately, then pair them up one-to-one (this is straightforward: the 0th name goes with the 0th URL and the 0th ingredient list, and so on in order).
import requests
# Import the requests library
from bs4 import BeautifulSoup
# Import the BeautifulSoup library
res_foods = requests.get('http://www.xiachufang.com/explore/')
# Fetch the page (the original URL was lost in extraction; the "explore" page is assumed here)
bs_foods = BeautifulSoup(res_foods.text, 'html.parser')
# Parse the HTML
tag_name = bs_foods.find_all('p', class_='name')
# Find the <p> tags holding the dish names and URLs
tag_ingredients = bs_foods.find_all('p', class_='ing ellipsis')
# Find the <p> tags holding the ingredients
list_all = []
# An empty list to hold the results
for x in range(len(tag_name)):
    # Loop once per dish name
    list_food = [tag_name[x].text[18:-14], tag_name[x].find('a')['href'], tag_ingredients[x].text[1:-1]]
    # Extract the info into a list. Note the slice here is [18:-14], not [17:-13] as before,
    # because here we take the text of the <p> tag rather than the <a> tag
    list_all.append(list_food)
    # Append the info to list_all
print(list_all)
# Print the results
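Pairing the x-th name with the x-th URL and the x-th ingredient list, as the index loop does, can also be written with zip(), which walks several lists in lockstep. A sketch with made-up stand-in lists in place of the scraped tags:

```python
# Stand-in lists playing the role of the scraped names / URLs / ingredients
names = ["红烧肉", "可乐鸡翅", "糖醋排骨"]
urls = ["/recipe/1/", "/recipe/2/", "/recipe/3/"]
ingredients = ["五花肉、冰糖", "鸡翅、可乐", "排骨、醋"]

# zip() pairs items by position: 0th with 0th, 1st with 1st, ...
list_all = [[n, u, i] for n, u, i in zip(names, urls, ingredients)]
print(list_all[0])  # ['红烧肉', '/recipe/1/', '五花肉、冰糖']
```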
II. Scraping Douban's Top 250 movies
1. Method 1: first find the smallest common parent tags, then extract the rank, title, rating, one-line quote, and link from each one.
import requests
from bs4 import BeautifulSoup

Url = 'https://movie.douban.com/top250?start='
# Base URL (lost in extraction; the Douban Top 250 list page is assumed here)
startIndex = 0
trail = '&filter='

def GetNeedURL():
    # Build the URL for the current page
    return Url + str(startIndex) + trail

movies = []
for i in range(10):
    # 10 pages of 25 movies each
    startIndex = i * 25
    res = requests.get(GetNeedURL())
    soup = BeautifulSoup(res.text, 'html.parser')
    oneMovies = soup.find_all('div', class_='item')
    # The smallest common parent tag for each movie
    for item in oneMovies:
        index = item.find('em').text
        # Rank
        name = item.find('img')['alt']
        # Title, taken from the poster's alt attribute
        URL = item.find('div', class_='hd').find('a')['href']
        # Link to the movie's page
        grade = item.find('span', class_='rating_num').text
        # Rating
        Recommendation = ''
        # Not every movie has a one-line quote, so default to empty
        if item.find('p', class_='quote'):
            Recommendations = item.find('p', class_='quote').text
            Recommendation = Recommendations[1:len(Recommendations) - 1]
            # Trim the surrounding newlines
        movies.append([index, name, URL, grade, Recommendation])
print(movies)
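The list is paged via the start parameter, which advances in steps of 25. The URL construction above can be sketched as a pure function; the base URL is an assumption, since the original was stripped during extraction:

```python
def page_url(page, base='https://movie.douban.com/top250?start=', trail='&filter='):
    # Each Douban Top 250 page shows 25 movies; page 0 starts at item 0
    return base + str(page * 25) + trail

for page in range(3):
    print(page_url(page))
# https://movie.douban.com/top250?start=0&filter=
# https://movie.douban.com/top250?start=25&filter=
# https://movie.douban.com/top250?start=50&filter=
```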
2. Method 2: extract all the ranks, all the titles, all the ratings, all the quotes, and all the links separately, then pair them up one-to-one by position.
import requests
from bs4 import BeautifulSoup

Url = 'https://movie.douban.com/top250?start='
# Base URL (lost in extraction; the Douban Top 250 list page is assumed here)
startIndex = 0
trail = '&filter='

def GetNeedURL():
    # Build the URL for the current page
    return Url + str(startIndex) + trail

movies = []
for i in range(10):
    startIndex = i * 25
    res = requests.get(GetNeedURL())
    soup = BeautifulSoup(res.text, 'html.parser')
    movielist = soup.find('ol', class_='grid_view')
    indexs = movielist.find_all('em')
    # All the ranks
    images = movielist.find_all('img')
    # All the posters (the alt attribute holds the title)
    hd = movielist.find_all('div', class_='hd')
    # All the title blocks (the <a> inside holds the link)
    grades = movielist.find_all('span', class_='rating_num')
    # All the ratings
    Recommendations = movielist.find_all('div', class_='bd')
    # All the blocks that may contain a one-line quote
    for x in range(len(indexs)):
        # Use a new loop variable so the page counter i is not shadowed
        Recommendation = ''
        # Not every movie has a quote, so default to empty
        if Recommendations[x].find('p', class_='quote'):
            aa = Recommendations[x].find('p', class_='quote').text
            Recommendation = aa[1:len(aa) - 1]
            # Trim the surrounding newlines
        movies.append([indexs[x].text, images[x]['alt'], hd[x].find('a')['href'], grades[x].text, Recommendation])
print(movies)
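When a movie has no one-line quote, find() returns None, so reading .text directly would raise an AttributeError; guarding the lookup and falling back to an empty string also prevents the previous movie's quote from being reused. A sketch of that pattern with a hypothetical helper and a minimal stand-in for a bs4 tag:

```python
class FakeTag:
    # Minimal stand-in for a bs4 tag, just for demonstration
    def __init__(self, text):
        self.text = text

def safe_text(tag, default=''):
    # find() returns None when the element is missing; fall back cleanly
    return tag.text.strip() if tag is not None else default

print(safe_text(FakeTag('  希望让人自由。  ')))  # 希望让人自由。
print(safe_text(None))  # prints an empty string
```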