python3 [Crawler Primer in Practice] Crawling Youtx (游天下) Nanjing short-term rentals with Scrapy and storing them in MongoDB


Summary: overall this is not hard, there are just quite a few fields to extract. In total the spider collected a bit over 120 Nanjing rental listings.
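For reference, the project can be run either with the scrapy crawl youtx command or from a small script. A minimal sketch, assuming the project layout described in the sections below (the file name run.py is arbitrary):

# run.py -- start the spider from a script instead of the scrapy CLI
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

if __name__ == '__main__':
    process = CrawlerProcess(get_project_settings())  # loads settings.py from section 4
    process.crawl('youtx')  # spider name defined in section 2
    process.start()  # blocks until the crawl finishes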

1 The scraped item

# -*- coding: utf-8 -*-

# Define here the models for your scraped items
#
# See documentation in:
# .html

import scrapy


class YoutxnanjinItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()

    # Listing name
    homeName = scrapy.Field()
    # Listing link
    homeLine = scrapy.Field()
    # Price per night
    homeSinglePrice = scrapy.Field()
    # Listing address
    homeAddress = scrapy.Field()
    # Recent listing details
    homeDetai = scrapy.Field()
    # Price for stays of 7+ days
    homeSeven = scrapy.Field()
    # Price for stays of 30+ days
    homeThirth = scrapy.Field()
    # Host name
    homePerson = scrapy.Field()
    # Host avatar
    homePersonImg = scrapy.Field()
    # Host avatar link
    homePersonLink = scrapy.Field()
    # Main listing photo
    homePicBg = scrapy.Field()
    # Main listing photo link
    homePicLink = scrapy.Field()
    # Brand shop info
    # homePinPai = scrapy.Field()
    # Star host
    # homeStarrPerson = scrapy.Field()

I'll just say it: the comments are pretty thorough, aren't they?

2 The spider

#encoding=utf8
import scrapy
from youtxNanJin.items import YoutxnanjinItem


class NanJinDefault(scrapy.Spider):
    name = 'youtx'
    allowed_domains = ['youtx']
    start_urls = ["{}".format(n) for n in range(0, 6)]

    def parse(self, response):
        # print(response.body)
        node_list = response.xpath("//div[@class='duanzu houseList']/ul/li[@class='clearfix']")
        # print(node_list)
        for node in node_list:
            item = YoutxnanjinItem()
            homeName = node.xpath("./div[@class='houseInfo clearfix']/div[@class='house-tit clearfix']/h3/a/text()").extract()
            homeLink = node.xpath("./div[@class='houseInfo clearfix']/div[@class='house-tit clearfix']/h3/a/@href").extract()
            print(homeName)
            print(homeLink)
            # Price per night
            homeSinglePrice = node.xpath("./div[@class='houseInfo clearfix']/div[@class='house-tit clearfix']/div[@class='house-price mt9']/span/span[@class='housePrice']/text()").extract()
            print(homeSinglePrice)
            # Listing address
            homeAddress = node.xpath("./div[@class='houseInfo clearfix']/div[@class='houseInfo-left mt2']/p[@class='clearfix mt5']/text()").extract()
            # Listing details
            homeDesc = node.xpath("./div[@class='houseInfo clearfix']/div[@class='houseInfo-left mt2']/p[@class='mt5']/text()").extract()
            homeDesc2 = node.xpath("./div[@class='houseInfo clearfix']/div[@class='houseInfo-left mt2']/p[@class='mt5']/span[2]/text()").extract()
            print(homeAddress)
            print(homeDesc)
            print(homeDesc2)
            # Price for stays of 30+ days
            homeThrty = node.xpath("./div[@class='houseInfo clearfix']/div[@class='house-tit clearfix']/div[@class='house-price mt9']/div[@class='mix12_5']/div[@class='discount']/div[@class='discount-price']/span//text()").extract()
            print(homeThrty)
            # Host info
            homePerson = node.xpath("./div[@class='houseInfo clearfix']/div[@class='agentInfo mt16']/p[1]/a/text()").extract()
            # Host link
            homePersonLink = node.xpath("./div[@class='houseInfo clearfix']/div[@class='agentInfo mt16']/p[1]/a/@href").extract()
            print(homePerson)
            print(homePersonLink)
            # Main listing photo
            homeBigPic = node.xpath("./div[@class='house-img']/a[1]/img/@src").extract()
            homeBigPicLink = node.xpath("./div[@class='house-img']/a[1]/@href").extract()
            print(homeBigPic)
            print(homeBigPicLink)
            # Host avatar
            personPic = node.xpath("./div[@class='house-img']/a[2]/img/@src").extract()
            # Host avatar link
            personPicLink = node.xpath("./div[@class='house-img']/a[2]/img/@href").extract()
            print(personPic)
            print(homePersonLink)

            item['homeName'] = "".join(homeName)
            item['homeLine'] = "".join(homeLink)
            item['homeSinglePrice'] = "".join(homeSinglePrice)
            item['homeAddress'] = "".join(homeAddress)
            item['homeDetai'] = "".join(homeDesc) + "".join(homeDesc2)
            # This value has not been extracted separately yet
            item['homeSeven'] = "".join(homeThrty)
            item['homeThirth'] = "".join(homeThrty)
            item['homePerson'] = "".join(homePerson)
            item['homePersonImg'] = "".join(personPic)
            item['homePersonLink'] = "".join(homePersonLink)
            item['homePicBg'] = "".join(homeBigPic)
            item['homePicLink'] = "".join(homeBigPicLink)
            yield item

Note: the XPath expressions here are fairly long because of how the page is structured. They could be factored out, but I'm leaving them inline for now.
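If you do want to shorten them, one option (not used in the article) is to select the shared parent <div> once per listing and write the remaining XPaths relative to it. A minimal sketch reusing the class names from the spider above; the variable names info and tit are just illustrative:

# Inside the for-node loop: grab the shared sub-tree once,
# then query it with shorter relative XPaths.
info = node.xpath("./div[@class='houseInfo clearfix']")
tit = info.xpath("./div[@class='house-tit clearfix']")

homeName = tit.xpath("./h3/a/text()").extract()
homeLink = tit.xpath("./h3/a/@href").extract()
homeSinglePrice = tit.xpath(
    "./div[@class='house-price mt9']/span/span[@class='housePrice']/text()"
).extract()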

3 Next, the item pipelines

# -*- coding: utf-8 -*-

# Define your item pipelines here
#
# Don't forget to add your pipeline to the ITEM_PIPELINES setting
# See: .html
import json

import pymongo
from scrapy.conf import settings  # deprecated in newer Scrapy; kept as in the original project


class YoutxnanjinPipeline(object):
    def process_item(self, item, spider):
        return item


class YouTXMongo(object):
    def __init__(self):
        self.client = pymongo.MongoClient(host=settings['MONGO_HOST'], port=settings['MONGO_PORT'])
        self.db = self.client[settings['MONGO_DB']]
        self.post = self.db[settings['MONGO_COLL']]

    def process_item(self, item, spider):
        postItem = dict(item)
        self.post.insert(postItem)
        return item


# Write items to a JSON file
class JsonWritePipline(object):
    def __init__(self):
        self.file = open('游天下南京.json', 'w', encoding='utf-8')

    def process_item(self, item, spider):
        line = json.dumps(dict(item), ensure_ascii=False) + "\n"
        self.file.write(line)
        return item

    def close_spider(self, spider):  # Scrapy calls close_spider, not spider_closed
        self.file.close()

Once the storage code has been written once, it can be carried over to other projects almost unchanged, provided nothing breaks along the way.
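The original pipeline reads its configuration through scrapy.conf, which has been removed from newer Scrapy releases. A hedged sketch of a more portable MongoDB pipeline using the from_crawler hook is below; the class name MongoPipeline is made up for illustration, while the setting names match the ones defined in section 4:

import pymongo


class MongoPipeline(object):
    """Illustrative, reusable MongoDB pipeline (not the article's original class)."""

    def __init__(self, host, port, db_name, coll_name):
        self.host, self.port = host, port
        self.db_name, self.coll_name = db_name, coll_name

    @classmethod
    def from_crawler(cls, crawler):
        # Read the same settings that settings.py defines in section 4.
        s = crawler.settings
        return cls(s.get('MONGO_HOST'), s.getint('MONGO_PORT'),
                   s.get('MONGO_DB'), s.get('MONGO_COLL'))

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(host=self.host, port=self.port)
        self.coll = self.client[self.db_name][self.coll_name]

    def process_item(self, item, spider):
        self.coll.insert_one(dict(item))  # insert_one is the non-deprecated pymongo call
        return item

    def close_spider(self, spider):
        self.client.close()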

4 The settings code

This is mainly where the MongoDB connection and the User-Agent header are configured.

# -*- coding: utf-8 -*-

# Scrapy settings for youtxNanJin project
#
# For simplicity, this file contains only settings considered important or
# commonly used. You can find more settings consulting the documentation:
#
#     .html
#     .html
#     .html

BOT_NAME = 'youtxNanJin'

SPIDER_MODULES = ['youtxNanJin.spiders']
NEWSPIDER_MODULE = 'youtxNanJin.spiders'

# Crawl responsibly by identifying yourself (and your website) on the user-agent
# USER_AGENT = 'youtxNanJin (+)'
USER_AGENT = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36'

# Obey robots.txt rules
ROBOTSTXT_OBEY = False

# MongoDB configuration
MONGO_HOST = "127.0.0.1"  # host IP
MONGO_PORT = 27017  # port
MONGO_DB = "YouTianXia"  # database name
MONGO_COLL = "house_nanjin"  # collection

# Configure maximum concurrent requests performed by Scrapy (default: 16)
#CONCURRENT_REQUESTS = 32

# Configure a delay for requests for the same website (default: 0)
# See .html#download-delay
# See also autothrottle settings and docs
#DOWNLOAD_DELAY = 3
# The download delay setting will honor only one of:
#CONCURRENT_REQUESTS_PER_DOMAIN = 16
#CONCURRENT_REQUESTS_PER_IP = 16

# Disable cookies (enabled by default)
COOKIES_ENABLED = False

# Disable Telnet Console (enabled by default)
#TELNETCONSOLE_ENABLED = False

# Override the default request headers:
#DEFAULT_REQUEST_HEADERS = {
#   'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8',
#   'Accept-Language': 'en',
#}

# Enable or disable spider middlewares
# See .html
#SPIDER_MIDDLEWARES = {
#    'youtxNanJin.middlewares.YoutxnanjinSpiderMiddleware': 543,
#}

# Enable or disable downloader middlewares
# See .html
#DOWNLOADER_MIDDLEWARES = {
#    'youtxNanJin.middlewares.MyCustomDownloaderMiddleware': 543,
#}

# Enable or disable extensions
# See .html
#EXTENSIONS = {
#    'scrapy.extensions.telnet.TelnetConsole': None,
#}

# Configure item pipelines
# See .html
ITEM_PIPELINES = {
    # 'youtxNanJin.pipelines.YoutxnanjinPipeline': 300,
    'youtxNanJin.pipelines.YouTXMongo': 300,
    'youtxNanJin.pipelines.JsonWritePipline': 300,
}

# Enable and configure the AutoThrottle extension (disabled by default)
# See .html
#AUTOTHROTTLE_ENABLED = True
# The initial download delay
#AUTOTHROTTLE_START_DELAY = 5
# The maximum download delay to be set in case of high latencies
#AUTOTHROTTLE_MAX_DELAY = 60
# The average number of requests Scrapy should be sending in parallel to
# each remote server
#AUTOTHROTTLE_TARGET_CONCURRENCY = 1.0
# Enable showing throttling stats for every response received:
#AUTOTHROTTLE_DEBUG = False

# Enable and configure HTTP caching (disabled by default)
# See .html#httpcache-middleware-settings
#HTTPCACHE_ENABLED = True
#HTTPCACHE_EXPIRATION_SECS = 0
#HTTPCACHE_DIR = 'httpcache'
#HTTPCACHE_IGNORE_HTTP_CODES = []
#HTTPCACHE_STORAGE = 'scrapy.extensions.httpcache.FilesystemCacheStorage'

To wrap up, here's a screenshot of the crawled results. Crawlers like this are all fairly similar and make good practice; the next step is to learn to crawl larger sites where the extraction is harder to keep tidy.
