I am using the tokenizer from NLTK in Python.
There are already a whole bunch of answers on the forum for removing punctuation. However, none of them addresses all of the following issues together:
Is there an elegant way of solving both problems?
Recommended answer:
If you want to tokenize your string all in one shot, I think your only choice will be to use nltk.tokenize.RegexpTokenizer. The following approach will allow you to use punctuation as a marker to remove characters of the alphabet (as noted in your third requirement) before removing the punctuation altogether. In other words, this approach will remove *u* before stripping all punctuation.
One way to go about this, then, is to tokenize on gaps like so:
>>> from nltk.tokenize import RegexpTokenizer
>>> s = '''He said,"that's it." *u* Hello, World.'''
>>> toker = RegexpTokenizer(r'((?<=[^\w\s])\w(?=[^\w\s])|(\W))+', gaps=True)
>>> toker.tokenize(s)
['He', 'said', 'that', 's', 'it', 'Hello', 'World']  # omits *u* per your third requirement
This should meet all three of the criteria you specified above. Note, however, that this tokenizer will not return tokens such as "A". Furthermore, the gap pattern only swallows single letters that are both preceded and followed by punctuation; otherwise, "Go." would not return a token. You may need to refine the regex in other ways, depending on what your data looks like and what your expectations are.
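To see exactly what the gap pattern does without pulling in NLTK, here is a minimal sketch that mimics RegexpTokenizer's gaps=True behavior using only the standard-library re module (the groups are made non-capturing so re.split returns just the text between the gaps):

```python
import re

# The same gap pattern as the RegexpTokenizer above, with non-capturing
# groups: a run of non-word characters, optionally swallowing a single
# letter that is sandwiched between punctuation marks (e.g. the u in *u*).
GAP = re.compile(r"(?:(?<=[^\w\s])\w(?=[^\w\s])|\W)+")

def tokenize(text):
    """Return the spans between gap matches, dropping empty strings."""
    return [tok for tok in GAP.split(text) if tok]

print(tokenize('''He said,"that's it." *u* Hello, World.'''))
# ['He', 'said', 'that', 's', 'it', 'Hello', 'World']
```

This also makes the caveats above easy to check: `tokenize('"A" grade')` returns only `['grade']`, because the lone letter A is preceded and followed by punctuation and gets swallowed into the gap, while `tokenize('Go.')` returns `['Go']` intact.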