Snowball Stemmer only stems the last word


Problem Description

I want to stem the documents in a corpus of plain text documents using the tm package in R. When I apply the SnowballStemmer function to all documents of the corpus, only the last word of each document is stemmed.

library(tm)
library(Snowball)
library(RWeka)
library(rJava)
path <- c("C:/path/to/directory")
corp <- Corpus(DirSource(path),
               readerControl = list(reader = readPlain, language = "en_US",
                                    load = TRUE))
tm_map(corp, SnowballStemmer) # stemDocument has the same problem

I think it is related to the way the documents are read into the corpus. To illustrate this with some simple examples:

> vec<-c("running runner runs","happyness happies")
> stemDocument(vec) 
   [1] "running runner run" "happyness happi" 

> vec2<-c("running","runner","runs","happyness","happies")
> stemDocument(vec2)
   [1] "run"    "runner" "run"    "happy"  "happi" <- 

> corp<-Corpus(VectorSource(vec))
> corp<-tm_map(corp, stemDocument)
> inspect(corp)
   A corpus with 2 text documents

   The metadata consists of 2 tag-value pairs and a data frame
   Available tags are:
     create_date creator 
   Available variables in the data frame are:
     MetaID 

   [[1]]
   run runner run

   [[2]]
   happy happi

> corp2<-Corpus(DirSource(path), readerControl=list(reader=readPlain, language="en_US", load=TRUE))
> corp2<-tm_map(corp2, stemDocument)
> inspect(corp2)
   A corpus with 2 text documents

   The metadata consists of 2 tag-value pairs and a data frame
   Available tags are:
     create_date creator 
   Available variables in the data frame are:
     MetaID 

   $`1.txt`
   running runner runs

   $`2.txt`
   happyness happies

Recommended Answer

Load the required libraries:

library(tm)
library(Snowball)

Create a vector:

vec<-c("running runner runs","happyness happies")

Create a corpus from the vector:

vec<-Corpus(VectorSource(vec))

It is very important to check the class of our corpus and preserve it, because we want a standard corpus that R functions understand:

class(vec[[1]])

vec[[1]]
<<PlainTextDocument (metadata: 7)>>
running runner runs

This should tell you it is a plain text document.

So now we work around the faulty stemDocument behavior. First we convert the plain text to character, then we split the text into words, apply stemDocument (which works fine on individual words), and paste the result back together. Most importantly, we reconvert the output to a PlainTextDocument as provided by the tm package.

stemDocumentfix <- function(x)
{
    # split the document into individual words, stem each word,
    # paste them back together, and rewrap as a PlainTextDocument
    PlainTextDocument(paste(stemDocument(unlist(strsplit(as.character(x), " "))), collapse = ' '))
}
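
As an aside, splitting on a single space leaves tabs and newlines attached to neighbouring words. If your files contain such whitespace, a variant that splits on any run of whitespace should behave the same way; the "\\s+" pattern below is my assumption, not part of the original answer.

# variant of stemDocumentfix (an assumption, not from the original
# answer): split on any run of whitespace rather than a single space,
# so tabs and newlines do not stay attached to words
stemDocumentfix2 <- function(x)
{
    words <- unlist(strsplit(as.character(x), "\\s+"))
    PlainTextDocument(paste(stemDocument(words), collapse = " "))
}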

Now we can use the standard tm_map on our corpus:

vec1 = tm_map(vec, stemDocumentfix)

The result is:

vec1[[1]]
<<PlainTextDocument (metadata: 7)>>
run runner run
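
The same transformation should also work on the directory-based corpus from the question; this usage sketch assumes the corp2 object built earlier from DirSource.

# apply the fixed stemmer to the corpus read from disk
# (assumes corp2 from the question above)
corp2 <- tm_map(corp2, stemDocumentfix)
inspect(corp2)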

The most important thing to remember is to always preserve the class of the documents in the corpus. I hope this is a simplified solution to your problem, using functions from within the two loaded libraries.
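
If you want to confirm that the class survived the transformation, a minimal check along these lines should do (assuming the vec1 corpus built above):

# sanity check (assumes vec1 from above): every document should
# still be a PlainTextDocument after tm_map
sapply(vec1, function(d) class(d)[1])
# expected: "PlainTextDocument" "PlainTextDocument"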
