Question
I'm currently scraping tweets based on certain keywords using R v. 1.0.44 and the twitteR package (newest version). Specifically, I use the following command:
my_twitter_data <- searchTwitter("#aleppo", n = 40000, lang = "en", since = '2016-12-12', until = "2016-12-13", retryOnRateLimit = 120)
In a request for 40k tweets about #aleppo (which takes quite some time to retrieve due to rate limiting), only about 5k of the results are original tweets, i.e. strip_retweets(my_twitter_data, strip_manual = TRUE, strip_mt = TRUE) returns a list of length ~5k.
My problem is that I spend much of my rate limit, and therefore time, on retweets that are irrelevant to my further analysis. Is there a way around this in R, so that I only spend my rate limit on original tweets?
Answer
You can add -filter:retweets to your query:
my_twitter_data <- searchTwitter("#aleppo -filter:retweets", n = 40000,
lang = "en", since = '2016-12-12',
until = "2016-12-13", retryOnRateLimit = 120)
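As a sanity check, you can strip retweets from the filtered result and compare lengths; the two should now be (nearly) equal. Note that -filter:retweets only removes native retweets on the API side, so manually quoted "RT"/"MT" posts may still be caught by strip_retweets. This sketch assumes you have already authenticated via setup_twitter_oauth() with your own app credentials:

```r
library(twitteR)

# Authenticate first (placeholders -- supply your own app credentials):
# setup_twitter_oauth(consumer_key, consumer_secret, access_token, access_secret)

# With -filter:retweets the API returns only original tweets,
# so the rate limit is no longer spent on retweets
my_twitter_data <- searchTwitter("#aleppo -filter:retweets", n = 40000,
                                 lang = "en", since = "2016-12-12",
                                 until = "2016-12-13", retryOnRateLimit = 120)

# Stripping retweets afterwards should change little or nothing,
# apart from manual "RT"/"MT" posts the server-side filter misses
originals <- strip_retweets(my_twitter_data, strip_manual = TRUE, strip_mt = TRUE)
length(my_twitter_data)
length(originals)
```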