I'm looking to do some very simple data mining (frequencies, bigrams, trigrams) on some Persian-language Facebook posts that I have collected and archived in a CSV. Below is the script I would use on an English CSV of Facebook comments to unnest all of the individual words into a column of their own.
library(dplyr)
library(stringr)
library(tidytext)

# stc2 holds the Facebook comments; reg_words is a tokenisation regex defined elsewhere
stp_tidy <- stc2 %>%
  filter(!str_detect(Message, "^RT")) %>%
  mutate(text = str_replace_all(Message, "https://t.co/[A-Za-z\\d]+|http://[A-Za-z\\d]+|&|<|>|RT", "")) %>%
  unnest_tokens(word, text, token = "regex", pattern = reg_words) %>%
  filter(!word %in% stop_words$word,
         str_detect(word, "[a-z]"))

Does anyone know of a way to apply unnest_tokens to Persian (specifically Dari)?
Posted on 2020-01-06 21:46:25
Two options: the first example uses quanteda, the second uses udpipe.

Note that printing tibbles that contain Persian text looks odd, i.e. features and values tend to be printed in the wrong columns, but the data is stored correctly in the object for further processing. The outputs of the two options also differ slightly, but the differences tend to be negligible. To read in the data I used the readtext package, which tends to work well with quanteda.
1. quanteda
library(quanteda)
library(readtext)
# library(stopwords)
stp_test <- readtext("stp_test.csv", encoding = "UTF-8")
# quick check of the non-empty messages / texts
stp_test$Message[stp_test$Message != ""]
stp_test$text[stp_test$text != ""]

# remove records with empty messages
stp_test <- stp_test[stp_test$Message != "", ]
stp_corp <- corpus(stp_test,
                   docid_field = "doc_id",
                   text_field = "Message")
stp_toks <- tokens(stp_corp, remove_punct = TRUE)
stp_toks <- tokens_remove(stp_toks, stopwords::stopwords(language = "fa", source = "stopwords-iso"))
# the step for creating ngrams 1-3 can be done here, after removing
# stopwords (see the sketch after textstat_frequency below)
# stp_ngrams <- tokens_ngrams(stp_toks, n = 1L:3L, concatenator = "_")
stp_dfm <- dfm(stp_toks)
textstat_frequency(stp_dfm)
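
A minimal sketch of the commented ngram step above, which gets you the bigram and trigram frequencies the question asks about (the stp_ngram_dfm name and the top-20 cutoff are illustrative; textstat_frequency() takes an optional n to keep only the top-ranked features):

stp_ngrams <- tokens_ngrams(stp_toks, n = 1L:3L, concatenator = "_")
# document-feature matrix over the unigrams, bigrams and trigrams
stp_ngram_dfm <- dfm(stp_ngrams)
# top 20 ngrams by frequency
textstat_frequency(stp_ngram_dfm, n = 20)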
# transform into tidy data.frame
library(dplyr)
library(tidyr)
quanteda_tidy_out <- convert(stp_dfm, to = "data.frame") %>%
  pivot_longer(-document, names_to = "features")

2. udpipe
library(udpipe)
model <- udpipe_download_model(language = "persian-seraji")
ud_farsi <- udpipe_load_model(model$file_model)
# use stp_test from quanteda example.
x <- udpipe_annotate(ud_farsi, x = stp_test$Message, doc_id = stp_test$doc_id)
stp_df <- as.data.frame(x)
# selecting only nouns and verbs and removing stopwords
ud_tidy_out <- stp_df %>%
  filter(upos %in% c("NOUN", "VERB"),
         !token %in% stopwords::stopwords(language = "fa", source = "stopwords-iso"))

Both packages have good vignettes and support pages.
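
For the frequency and bigram/trigram counts on the udpipe side, a minimal sketch using udpipe's txt_nextgram() helper on the annotated tokens (grouping by doc_id keeps ngrams from crossing document boundaries; txt_nextgram() pads the last positions of each group with NA):

library(dplyr)

# unigram frequencies straight from the annotated tokens
stp_df %>% count(token, sort = TRUE)

# bigram frequencies; trigrams work the same way with n = 3
stp_df %>%
  group_by(doc_id) %>%
  mutate(bigram = txt_nextgram(token, n = 2, sep = " ")) %>%
  ungroup() %>%
  filter(!is.na(bigram)) %>%
  count(bigram, sort = TRUE)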
https://stackoverflow.com/questions/59602847