Way around HTTP 403 with Python
I'm making a program that uses Google to search, but I can't because of HTTP Error 403. Is there any way around it? I'm using mechanize to browse. Here is my code:

```python
from mechanize import Browser

inp = raw_input("Enter Word: ")
Word = inp
SEARCH_PAGE = "https://www.google.com/"

browser = Browser()
browser.open(SEARCH_PAGE)
browser.select_form(nr=0)
browser['q'] = Word
browser.submit()
```

Here is the error message:

```
Traceback (most recent call last):
  File "C:\Python27\Project\Auth2.py", line 16, in <module>
    browser.submit()
  File "C:\Python27\lib\site-packages\mechanize\_mechanize.py", line 541, in submit
    return self.open(self.click(*args, **kwds))
  File "C:\Python27\lib\site-packages\mechanize\_mechanize.py", line 203, in open
    return self._mech_open(url, data, timeout=timeout)
  File "C:\Python27\lib\site-packages\mechanize\_mechanize.py", line 255, in _mech_open
    raise response
httperror_seek_wrapper: HTTP Error 403: request disallowed by robots.txt
```

Please help, and thank you.
Accepted Answer
You can tell Mechanize to ignore the robots.txt file:

```python
browser.set_handle_robots(False)
```
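Applied to the original script, a sketch of the fix might look like the following. It assumes mechanize on Python 2, and that the first form on the Google homepage still has a field named `q`. Note that Google may also reject mechanize's default User-Agent string, so a browser-like User-Agent header is set here as well (an assumption, not part of the answer above):

```python
from mechanize import Browser

word = raw_input("Enter Word: ")
SEARCH_PAGE = "https://www.google.com/"

browser = Browser()
# Disable mechanize's robots.txt check, which caused the 403 above
browser.set_handle_robots(False)
# Assumption: Google may also block the default mechanize User-Agent,
# so present a browser-like one
browser.addheaders = [('User-Agent', 'Mozilla/5.0')]

browser.open(SEARCH_PAGE)
browser.select_form(nr=0)   # first form on the page
browser['q'] = word
response = browser.submit()
print response.read()
```

Keep in mind that disabling the robots.txt check means deliberately ignoring the site's crawling policy; for programmatic Google searches, an official search API is the supported route.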