Scraping reviews from Kongfz (孔夫子旧书网)
This time we scrape bookstore reviews from Kongfz, a used-book site, and write them both to a txt file and to a MySQL database.
This uses the json module.
The json module handles format conversion: here it parses the scraped review response into a Python dict that is easy to work with. The module provides load() and loads(), plus dump() and dumps():
- loads() converts a JSON string into a Python dict.
- dumps() converts a Python dict into a JSON string.
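The two directions can be seen in a couple of lines (the sample string here is made up):

```python
import json

# loads(): JSON string -> Python dict
data = json.loads('{"page": 1, "rating": "all"}')
print(data['page'])

# dumps(): Python dict -> JSON string
text = json.dumps({'page': 1, 'rating': 'all'}, sort_keys=True)
print(text)
```

load() and dump() do the same conversions but read from and write to file objects instead of strings.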
One more thing: the review data shows up in the browser DevTools under Network, as a js or xhr request.
Full code:
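Decoding such a response is a one-liner once you have the body. A minimal sketch, using a hard-coded sample body in the shape this site's review endpoint returns (result -> reviewList, with itemName, content, reviewTime and appraiserNickname keys); the sample values are made up, and a real run would fetch the body with requests.get():

```python
import json

# Simulated body of the review xhr response (hypothetical sample data)
body = '''{"result": {"reviewList": [
    {"itemName": "旧书一本", "content": "品相不错", "reviewTime": "2018-06-01",
     "appraiserNickname": "reader1"}
]}}'''

# Decode the JSON body and walk the review list
result = json.loads(body)['result']
for review in result['reviewList']:
    line = '%s | %s | %s | %s' % (review['itemName'], review['content'],
                                  review['reviewTime'], review['appraiserNickname'])
    print(line)
```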
#-*- coding:utf-8 -*-
from bs4 import BeautifulSoup
import requests
import json
import MySQLdb
# Work around UnicodeEncodeError when writing (Python 2 only)
import sys
reload(sys)
sys.setdefaultencoding('utf-8')

print('Connecting to the MySQL server...')
conn = MySQLdb.connect(host='127.0.0.1', user='root', passwd='123mysql', db='onefive', charset='utf8')
print('Connected!')
cur = conn.cursor()
sql = """CREATE TABLE comment(store CHAR(10),book CHAR(60),comment CHAR(100),time CHAR(20),reviewer CHAR(20))"""
cur.execute(sql)
conn.commit()
url1 = '/'
user_agent = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.140 Safari/537.36 Edge/17.17134'
headers = {'User-Agent': user_agent}
html1 = requests.get(url1, headers=headers).content
soup = BeautifulSoup(html1, 'html.parser')
article = soup.find('div', attrs={'class': 'main-box'})
# Shop name
store = article.find('div', attrs={'class': 'shop_top_title'}).get_text()
#print store
f = open('comment2.txt', 'a')
f.write(store)
f.close()
cur.execute("INSERT INTO comment(store) VALUES ('%s');" % (store))
conn.commit()
# Review details
for page in range(1, 3):
    url2 = '=1710684&itemId=&page=' + str(page) + '&needEmpty=0&rating=all'
    html2 = requests.get(url2, headers=headers).content
    b = json.loads(html2)
    dic = b['result']
    for k in dic['reviewList']:
        book = k['itemName'].encode('UTF-8')
        print book
        comment = k['content'].encode('UTF-8')
        print comment
        time = k['reviewTime'].encode('UTF-8')
        print time
        reviewer = k['appraiserNickname'].encode('UTF-8')
        print reviewer
        end = book + '\n' + comment + '\n' + time + '\n' + reviewer + '\n'
        f = open('comment2.txt', 'a')
        f.write(end)
        f.close()
        cur.execute("INSERT INTO comment(book,comment,time,reviewer) VALUES ('%s', '%s', '%s','%s');" % (book, comment, time, reviewer))
        conn.commit()
conn.close()
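One caveat about the INSERT statements above: building SQL with % string formatting breaks as soon as a review contains a quote character, and it is open to SQL injection. DB-API drivers accept parameterized queries instead, where the driver does the quoting. A sketch of the same pattern using the stdlib sqlite3 driver for illustration (MySQLdb works the same way, except its placeholder is %s rather than ?):

```python
import sqlite3

# In-memory database, just to illustrate the parameterized-query pattern
conn = sqlite3.connect(':memory:')
cur = conn.cursor()
cur.execute("CREATE TABLE comment(book TEXT, comment TEXT, time TEXT, reviewer TEXT)")

# Quotes inside values are handled safely by the driver
row = ('旧书一本', "品相不错, 'good'", '2018-06-01', 'reader1')
cur.execute("INSERT INTO comment(book, comment, time, reviewer) VALUES (?, ?, ?, ?)", row)
conn.commit()

cur.execute("SELECT reviewer FROM comment")
print(cur.fetchone()[0])
```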