Python web scraping: collecting 巴乐兔 rental listings with BeautifulSoup


Start from the URL of the 巴乐兔 Shanghai rental listings page (巴乐兔上海).
Paging through the results shows:
Page 1 URL: '/?entrance=14'
Page 2 URL: '/?seachId=0&is_rec_house=0&entrance=14&solr_house_cnt=28156'
Page 3 URL: '/?seachId=0&is_rec_house=0&entrance=14&solr_house_cnt=28159'
We only need to pay attention to how the first part of the URL differs between pages; the rest consists of auto-generated fields. The URL for each page can therefore be built with the following code:

if page == 1:
    url = '/?entrance=14'
else:
    url = '' + str(page) + 'o1a1/'

Then request the page with requests and parse it with BeautifulSoup:

import requests
from bs4 import BeautifulSoup
r = requests.get(url, headers=headers)
soup = BeautifulSoup(r.content.decode('utf-8'), 'html.parser')
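
The headers variable used above is never defined in this excerpt; presumably it just carries a browser User-Agent so the site returns the normal listing page. A minimal sketch, with an assumed generic User-Agent string:

# Hypothetical headers dict; any common browser User-Agent string should do.
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"}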

Inspecting the page shows that all the information we need sits inside the div tag with class list-center:

Within that div, each li tag is one rental listing, so:

outer_div = soup.find('div', class_="list-center")
houses = outer_div.find_all('li', attrs={"class": "listUnit-date clearfix PBA_list_house"})
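
As a quick check (not part of the original code), printing the number of matched li tags confirms the selector actually found the listings on the page:

# Expect one entry per listing on the current page; 0 means the selector or the page layout has changed.
print(len(houses))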

Likewise, by examining the child tags of each li we can pull out the listing name, district, community and other fields. The code is as follows:

import re

houses_info = []
for house in houses:
    try:
        # Basic attributes stored directly on the li tag
        id = house.attrs['num']
        name = house.h3.a.attrs['title']
        address = re.search(r"(.*?)-(.{2})", name)
        area = address.group(2)
        community = address.group(1)
        url = house.find('a', attrs={"target": "_blank"}).attrs['href']
        price = house.attrs['price']
        rent_type = house.attrs['variant']
        # Room size, e.g. "15平米" -> "15"
        size_info = house.find('p', attrs={"class": "list-pic-ps"}).find("span", attrs={"class": False}).text
        size = re.search(r"(\d+)", size_info).group()
        # Transit info: metro line number, station name, distance in metres
        traffic_1 = house.find("div", attrs={"class": "list-pic-ad"}).text
        traffic_2 = re.search(r"距离(?:(\d+)号线)?(.*?)(?:(\d+)米)", traffic_1)
        if traffic_2 is None:
            traffic_info = '__'
        else:
            traffic = traffic_2.groups()
            info_list = []
            for i in traffic:
                if i is None:
                    info_list.append("")
                else:
                    info_list.append(i)
            traffic_info = '_'.join(info_list)
        release_time = house.find("span", attrs={"class": "room-time"}).text.replace(" 发布", "")
        grade = house.find("span", attrs={"class": "lan-ratedetail"}).text
        comment = house.find("span", attrs={"class": "lan-rate-people"}).text
        comment_num = re.search(r"(\d+)", comment).group()
    except Exception:
        continue  # skip listings that are missing any of the fields above
    houses_info.append({"id": id, "name": name, "area": area, "community": community,
                        "url": url, "price": price, "rent_type": rent_type, "size": size,
                        "traffic_info": traffic_info, "release_time": release_time,
                        "grade": grade, "comment_num": comment_num})
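
To make the transit regex easier to follow, here is what it extracts from a made-up example string (the sample text below is invented for illustration; real listings may phrase it differently):

import re

sample = "距离2号线南京东路站500米"  # hypothetical listing text
m = re.search(r"距离(?:(\d+)号线)?(.*?)(?:(\d+)米)", sample)
print(m.groups())  # ('2', '南京东路站', '500') -> joined as "2_南京东路站_500"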

Wrap the code above in a function, then define another function that saves the results to a CSV file:

def randerToFile(houses_info):
    with open("巴乐兔_上海.csv", "a", encoding='utf-8') as file:
        for house in houses_info:
            file.write("::".join(house.values()) + "\n")

Finally, loop over the pages and call these functions in turn to scrape the rental listings from every page on 巴乐兔; a sketch of that driver loop follows.
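
A minimal sketch of that driver loop, assuming the parsing code above has been wrapped into a getHousesInfo(page) function (that name, and the page count of 50, are assumptions, not from the original):

import time

# Hypothetical driver loop: getHousesInfo(page) is assumed to build the page URL,
# request it, and return the houses_info list produced by the parsing code above.
for page in range(1, 51):
    houses_info = getHousesInfo(page)
    randerToFile(houses_info)
    time.sleep(1)  # brief pause between pages to avoid hammering the site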
