This post covers how to scrape multiple URLs with Beautiful Soup.
Problem description
I'm trying to extract specific classes from multiple URLs. The tags and classes stay the same, but I need my Python program to scrape all of them as I input each link.
Here is my working example:
```python
from bs4 import BeautifulSoup
import requests

url = input('insert URL here: ')

# scrape elements
response = requests.get(url)
soup = BeautifulSoup(response.content, "html.parser")

# print titles only
h1 = soup.find("h1", class_="class-headline")
print(h1.get_text())
```

This works for individual URLs but not for a batch. Thanks for helping me. I learned a lot from this community.
Answer
Have a list of urls and iterate through it.
```python
from bs4 import BeautifulSoup
import requests

# replace with real URLs (include the http/https scheme)
urls = ['https://www.website1', 'https://www.website2', 'https://www.website3']

# scrape elements
for url in urls:
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    # print titles only
    h1 = soup.find("h1", class_="class-headline")
    print(h1.get_text())
```

If you are going to prompt the user for input for each site, it can be done this way:
```python
from bs4 import BeautifulSoup
import requests

msg = 'Enter URL, or type q and hit enter to exit: '
url = input(msg)
while url != 'q':
    response = requests.get(url)
    soup = BeautifulSoup(response.content, "html.parser")
    # print titles only
    h1 = soup.find("h1", class_="class-headline")
    print(h1.get_text())
    url = input(msg)  # reassign url, otherwise the loop never ends
```
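One caveat with all of the snippets above: they assume every page actually contains an `<h1 class="class-headline">`. If `find` matches nothing it returns `None`, and `h1.get_text()` then raises `AttributeError`, aborting the whole batch. A minimal sketch of a guard (the helper name `extract_headline` and the sample HTML strings are illustrative, not part of the original answer):

```python
from bs4 import BeautifulSoup

def extract_headline(html):
    """Return the text of <h1 class="class-headline">, or None if the tag is absent."""
    soup = BeautifulSoup(html, "html.parser")
    h1 = soup.find("h1", class_="class-headline")
    # guard: find() returns None when no tag matches
    return h1.get_text(strip=True) if h1 is not None else None

# Works both on pages with the headline and on pages without it:
print(extract_headline('<h1 class="class-headline">Hello</h1>'))  # Hello
print(extract_headline('<p>no headline here</p>'))                # None
```

Inside the loop, you would call the helper on `response.content` and skip URLs where it returns `None` instead of crashing.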