How to vectorize tweets using Spark's MLlib?

Question
I'd like to turn tweets into vectors for machine learning, so that I can categorize them based on content using Spark's K-Means clustering. E.g., all tweets relating to Amazon get put into one category.
I have tried splitting the tweet into words and creating a vector using HashingTF, which wasn't very successful.
Are there any other ways to vectorize tweets?
Answer
You can try this pipeline:
First, tokenize the input tweet (located in the text column). Basically, this creates a new column, rawWords, holding the list of words extracted from the original text. Because the tokenizer is configured with .setPattern("\\w+") and .setGaps(false), the pattern matches the tokens themselves (runs of word characters) rather than the gaps between them.
val tokenizer = new RegexTokenizer()
.setInputCol("text")
.setOutputCol("rawWords")
.setPattern("\\w+")
.setGaps(false)
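The tokenizer's behavior can be sketched in plain Scala, without Spark. With gaps set to false, the regex matches the tokens themselves, so punctuation and whitespace are simply dropped; RegexTokenizer also lowercases its input by default. The sample tweet below is made up for illustration.

```scala
// Plain-Scala sketch of RegexTokenizer with pattern "\\w+", gaps=false:
// the regex picks out runs of word characters, dropping punctuation.
val pattern = "\\w+".r
val tweet = "Loving my new @Amazon Echo!!!"
val rawWords = pattern.findAllIn(tweet.toLowerCase).toList
// rawWords == List("loving", "my", "new", "amazon", "echo")
```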
Second, you may consider removing stop words to drop less significant words from the text, such as a, the, of, etc.
val stopWordsRemover = new StopWordsRemover()
.setInputCol("rawWords")
.setOutputCol("words")
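What the remover does can also be sketched without Spark: filter the token list against a set of stop words. The tiny set below is illustrative only; Spark's StopWordsRemover ships with a much larger default English list.

```scala
// Hand-rolled sketch of stop-word removal (illustrative stop set only;
// StopWordsRemover uses a far larger built-in English list).
val stopWords = Set("a", "an", "the", "of", "my", "and")
val rawWords = List("loving", "my", "new", "amazon", "echo")
val words = rawWords.filterNot(stopWords.contains)
// words == List("loving", "new", "amazon", "echo")
```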
Now it's time to vectorize the words column. In this example I'm using CountVectorizer, which is quite basic. There are many others, such as TF-IDF vectorization (HashingTF followed by IDF). You can find more information in the Spark MLlib feature-extraction documentation.
I've configured the CountVectorizer so that it builds a vocabulary of at most 10,000 words; a word must appear in at least 5 documents (minDF) to enter the vocabulary, and at least once within a document (minTF) to be counted for that document.
val countVectorizer = new CountVectorizer()
.setInputCol("words")
.setOutputCol("features")
.setVocabSize(10000)
.setMinDF(5.0)
.setMinTF(1.0)
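The counting itself is easy to sketch: each vocabulary word maps to a vector index, and a document's word counts fill that vector. (Spark actually produces a sparse vector and applies the minDF/minTF thresholds while building the vocabulary; the dense version below just shows the core idea.)

```scala
// Dense sketch of CountVectorizer's core idea: vocabulary index -> count.
val vocabulary = List("amazon", "echo", "loving", "new")
val wordToIndex = vocabulary.zipWithIndex.toMap
val doc = List("loving", "new", "amazon", "echo", "amazon")
val features = Array.fill(vocabulary.size)(0.0)
doc.foreach(w => wordToIndex.get(w).foreach(i => features(i) += 1.0))
// features == Array(2.0, 1.0, 1.0, 1.0)
```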
Finally, just create the pipeline, fit it on the training dataset, and use the resulting model to transform your data.
val transformPipeline = new Pipeline()
  .setStages(Array(
    tokenizer,
    stopWordsRemover,
    countVectorizer))

// training and test are DataFrames with a "text" column
val model = transformPipeline.fit(training)
val vectorized = model.transform(test)
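Since the original question is about K-Means, the features column can be fed straight into Spark's KMeans estimator by appending it as one more pipeline stage. This is only a sketch: it assumes an active SparkSession, that training is a DataFrame with a text column, and k = 10 is an arbitrary placeholder for the number of tweet categories.

```scala
import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.clustering.KMeans

// Append K-Means so tweets are clustered by content in one pipeline.
val kmeans = new KMeans()
  .setFeaturesCol("features")   // output of the CountVectorizer
  .setPredictionCol("cluster")
  .setK(10)                     // number of categories; tune for your data

val clusteringPipeline = new Pipeline()
  .setStages(Array(tokenizer, stopWordsRemover, countVectorizer, kmeans))

val clusterModel = clusteringPipeline.fit(training)
val clustered = clusterModel.transform(training)  // adds a "cluster" column
```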
Hope it helps.