Problem scraping the next page with Selenium

This article walks through a problem with scraping the next page of results using Selenium; it may be a useful reference if you run into the same issue.

Problem Description

I am trying to scrape basic information on google. The code that I am using is the following. Unfortunately it does not move to the next page and I am not figuring the reason why. I am using selenium and google chrome as browser (no firefox). Could you please tell me what is wrong in my code?

driver.get('https://www.google.com/advanced_search?q=google&tbs=cdr:1,cd_min:3/4/2020,cd_max:3/4/2020&hl=en')
search = driver.find_element_by_name('q')
search.send_keys('tea')
search.submit()
soup = BeautifulSoup(driver.page_source, 'lxml')
result_div = soup.find_all('div', attrs={'class': 'g'})
titles = []
while True:
    next_page_btn = driver.find_elements_by_xpath("//a[@id='pnnext']")
    for r in result_div:
        if len(next_page_btn) < 1:
            print("no more pages left")
            break
        else:
            try:
                title = None
                title = r.find('h3')
                if isinstance(title, Tag):
                    title = title.get_text()
                    print(title)
                if title != '':
                    titles.append(title)
            except:
                continue
    element = WebDriverWait(driver, 5).until(expected_conditions.element_to_be_clickable((By.ID, 'pnnext')))
    driver.execute_script("return arguments[0].scrollIntoView();", element)
    element.click()

Recommended Answer

I set q in the query string to an empty string, used as_q (not q) as the name of the search box, and reordered your code a bit. I also put in a page limit to stop it from running forever.

from selenium import webdriver
from bs4 import BeautifulSoup
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions

driver = webdriver.Chrome()
driver.get('https://www.google.com/advanced_search?q=&tbs=cdr:1,cd_min:3/4/2020,cd_max:3/4/2020&hl=en')
search = driver.find_element_by_name('as_q')  # the advanced-search box is named as_q, not q
search.send_keys('tea')
search.submit()

titles = []
page_limit = 5
page = 0
while True:
    # Re-parse the current page on every iteration so new results are picked up.
    soup = BeautifulSoup(driver.page_source, 'lxml')
    result_div = soup.find_all('div', attrs={'class': 'g'})
    for r in result_div:
        for title in r.find_all('h3'):
            title = title.get_text()
            print(title)
            titles.append(title)
    # Stop when there is no "Next" link or the page limit has been reached.
    next_page_btn = driver.find_elements_by_id('pnnext')
    if len(next_page_btn) == 0 or page > page_limit:
        break
    element = WebDriverWait(driver, 5).until(expected_conditions.element_to_be_clickable((By.ID, 'pnnext')))
    driver.execute_script("return arguments[0].scrollIntoView();", element)
    element.click()
    page = page + 1
driver.quit()
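
Note: the answer above uses the Selenium 3 style helpers (find_element_by_name, find_elements_by_id), which were removed in Selenium 4. Below is a minimal sketch of the same pagination loop rewritten against the Selenium 4 By-based locator API; the selectors (as_q, pnnext, the result class g) are taken straight from the answer and depend on Google's current markup, so treat them as assumptions that may break.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup

driver = webdriver.Chrome()
driver.get('https://www.google.com/advanced_search?q=&tbs=cdr:1,cd_min:3/4/2020,cd_max:3/4/2020&hl=en')

# Selenium 4: find_element(By.NAME, ...) replaces find_element_by_name(...)
search = driver.find_element(By.NAME, 'as_q')
search.send_keys('tea')
search.submit()

titles = []
page_limit = 5
page = 0
while True:
    soup = BeautifulSoup(driver.page_source, 'lxml')
    for result in soup.find_all('div', attrs={'class': 'g'}):
        for title in result.find_all('h3'):
            titles.append(title.get_text())

    # Stop when there is no "Next" link or the page limit is reached.
    if not driver.find_elements(By.ID, 'pnnext') or page >= page_limit:
        break

    next_btn = WebDriverWait(driver, 5).until(EC.element_to_be_clickable((By.ID, 'pnnext')))
    driver.execute_script("arguments[0].scrollIntoView();", next_btn)
    next_btn.click()
    page += 1

driver.quit()

The logic is otherwise the same as the answer: re-parse the page source on each iteration, then click pnnext until it disappears or the page limit is hit.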
