Question
I am trying to parse this XML (www.reddit.com/r/videos/top/.rss) and am having trouble doing so. I am trying to save the YouTube links in each of the items, but am having trouble because of the "channel" child node. How do I get to this level so I can then iterate through the items?
#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
#close file because we dont need it anymore:
reddit_file.close()
#entire feed
reddit_root = etree.fromstring(reddit_data)
channel = reddit_root.findall('{http://purl.org/dc/elements/1.1/}channel')
print channel
reddit_feed = []
for entry in channel:
    #get description, url, and thumbnail
    desc = #not sure how to get this
    reddit_feed.append([desc])

Answer
You can try findall('channel/item'):
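The path argument to findall is relative to the node you call it on, so 'channel/item' selects every <item> nested under the root's <channel>. A minimal standalone illustration (Python 3 syntax; the two-item feed below is made up for demonstration):

```python
from xml.etree import ElementTree as etree

# Tiny RSS-shaped document to show how a 'channel/item' path
# reaches nodes nested below the root element.
xml = """<rss>
  <channel>
    <item><description>first</description></item>
    <item><description>second</description></item>
  </channel>
</rss>"""

root = etree.fromstring(xml)          # root is the <rss> element
items = root.findall('channel/item')  # path is relative to root
descs = [item.findtext('description') for item in items]
print(descs)  # ['first', 'second']
```

This also suggests why the original attempt found nothing: it qualified the tag with the Dublin Core namespace ({...}channel), but the <channel> element of an RSS 2.0 feed is unnamespaced.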
import urllib2
from xml.etree import ElementTree as etree

#reddit parse
reddit_file = urllib2.urlopen('http://www.reddit.com/r/videos/top/.rss')
#convert to string:
reddit_data = reddit_file.read()
print reddit_data
#close file because we dont need it anymore:
reddit_file.close()
#entire feed
reddit_root = etree.fromstring(reddit_data)
item = reddit_root.findall('channel/item')
print item
reddit_feed = []
for entry in item:
    #get description, url, and thumbnail
    desc = entry.findtext('description')
    reddit_feed.append([desc])
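The answer collects only the descriptions; the asker's actual goal was the YouTube links, which in this feed sit inside the HTML-escaped markup of each <description>. One way to pull them out is a regex over the decoded description text. A sketch (Python 3 syntax; the inline sample feed and the exact URL pattern are assumptions, so adapt the regex to what the real feed contains):

```python
import re
from xml.etree import ElementTree as etree

# Made-up sample mimicking a feed whose <description> holds
# HTML-escaped anchor tags; ElementTree unescapes the entities
# when it returns the text content.
sample = """<rss><channel><item>
  <description>&lt;a href="http://www.youtube.com/watch?v=abc123"&gt;[link]&lt;/a&gt;</description>
</item></channel></rss>"""

root = etree.fromstring(sample)
links = []
for item in root.findall('channel/item'):
    desc = item.findtext('description') or ''
    # Hypothetical pattern for plain watch URLs; broaden as needed.
    links.extend(re.findall(r'https?://www\.youtube\.com/watch\?v=[\w-]+', desc))
print(links)  # ['http://www.youtube.com/watch?v=abc123']
```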