Really fast word ngram vectorization in R


Edit: The new text2vec package is excellent, and solves this problem (and many others) really well.

See text2vec on CRAN, text2vec on GitHub, and the vignette that illustrates ngram tokenization.
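For reference, here is a minimal sketch of that approach applied to the sents1 vector defined below (argument names as in the current text2vec API; check the vignette for your installed version):

library(text2vec)
it <- itoken(sents1, tokenizer = space_tokenizer, progressbar = FALSE)
v <- create_vocabulary(it, ngram = c(1L, 2L))   # unigrams + bigrams
vectorizer <- vocab_vectorizer(v)
dtm <- create_dtm(it, vectorizer)               # sparse documents x ngrams matrix
dim(dtm)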

I have a pretty large text dataset in R, which I've imported as a character vector:

# Takes 15 seconds
system.time({
  set.seed(1)
  samplefun <- function(n, x, collapse){
    paste(sample(x, n, replace=TRUE), collapse=collapse)
  }
  words <- sapply(rpois(10000, 3) + 1, samplefun, letters, '')
  sents1 <- sapply(rpois(1000000, 5) + 1, samplefun, words, ' ')
})

I can convert this character data to a bag-of-words representation as follows:

library(stringi)
library(Matrix)
tokens <- stri_split_fixed(sents1, ' ')
token_vector <- unlist(tokens)
bagofwords <- unique(token_vector)
n.ids <- sapply(tokens, length)
i <- rep(seq_along(n.ids), n.ids)
j <- match(token_vector, bagofwords)
m <- sparseMatrix(i=i, j=j, x=1L)
colnames(m) <- bagofwords

So R can vectorize 1,000,000 short sentences into a bag-of-words representation in about 3 seconds (not bad!):

> m[1:3, 1:7]
3 x 7 sparse Matrix of class "dgCMatrix"
     fqt hqhkl sls lzo xrnh zkuqc mqh
[1,]   1     1   1   1    .     .   .
[2,]   .     .   .   .    1     1   1
[3,]   .     .   .   .    .     .   .

I can throw this sparse matrix into glmnet or irlba and do some pretty awesome quantitative analysis of textual data. Hooray!
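For example, both packages accept the dgCMatrix directly (sketch only; the outcome vector y below is made up purely for illustration and is not part of the real analysis):

library(glmnet)
library(irlba)
y <- rbinom(nrow(m), 1, 0.5)             # placeholder outcome, illustration only
fit <- glmnet(m, y, family = "binomial") # sparse logistic regression on the dtm
lsa <- irlba(m, nv = 10)                 # truncated SVD of the sparse dtm (LSA-style)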

Now I'd like to extend this analysis to a bag-of-ngrams matrix, rather than a bag-of-words matrix. So far, the fastest way I've found to do this is as follows (all of the ngram functions I could find on CRAN choked on this dataset, so I got a little help from SO):

find_ngrams <- function(dat, n, verbose=FALSE){
  library(pbapply)
  stopifnot(is.list(dat))
  stopifnot(is.numeric(n))
  stopifnot(n>0)
  if(n == 1) return(dat)
  pblapply(dat, function(y) {
    if(length(y)<=1) return(y)
    c(y, unlist(lapply(2:n, function(n_i) {
      if(n_i > length(y)) return(NULL)
      do.call(paste, unname(as.data.frame(embed(rev(y), n_i), stringsAsFactors=FALSE)), quote=FALSE)
    })))
  })
}

text_to_ngrams <- function(sents, n=2){
  library(stringi)
  library(Matrix)
  tokens <- stri_split_fixed(sents, ' ')
  tokens <- find_ngrams(tokens, n=n, verbose=TRUE)
  token_vector <- unlist(tokens)
  bagofwords <- unique(token_vector)
  n.ids <- sapply(tokens, length)
  i <- rep(seq_along(n.ids), n.ids)
  j <- match(token_vector, bagofwords)
  m <- sparseMatrix(i=i, j=j, x=1L)
  colnames(m) <- bagofwords
  return(m)
}

test1 <- text_to_ngrams(sents1)

This takes about 150 seconds (not bad for a pure R function), but I'd like to go faster and extend it to bigger datasets.

Are there any really fast functions in R for n-gram vectorization of text? Ideally I'm looking for an Rcpp function that takes a character vector as input and returns a sparse matrix of documents x ngrams as output, but I'd also be happy to have some guidance on writing the Rcpp function myself.

Even a faster version of the find_ngrams function would be helpful, as that's the main bottleneck. R is surprisingly fast at tokenization.
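For concreteness, here's a rough sketch of the kind of helper I have in mind (bigrams only, the name ngrams2_cpp is just for illustration, and this is certainly not an optimal implementation):

Rcpp::cppFunction('
List ngrams2_cpp(List tokens_list) {
  int ndocs = tokens_list.size();
  List out(ndocs);
  for (int d = 0; d < ndocs; d++) {
    std::vector<std::string> toks =
      Rcpp::as< std::vector<std::string> >(tokens_list[d]);
    int ntok = toks.size();
    std::vector<std::string> res;
    res.reserve(ntok > 1 ? 2 * ntok - 1 : ntok);
    // unigrams first, then space-joined bigrams, same layout as find_ngrams(dat, 2)
    for (int i = 0; i < ntok; i++) res.push_back(toks[i]);
    for (int i = 0; i + 1 < ntok; i++) res.push_back(toks[i] + " " + toks[i + 1]);
    out[d] = Rcpp::wrap(res);
  }
  return out;
}')

# drop-in replacement for find_ngrams(tokens, 2); the sparse-matrix step stays the same
tokens_cpp <- ngrams2_cpp(stri_split_fixed(sents1, ' '))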

Edit 1: Here's another example dataset:

sents2 <- sapply(rpois(100000, 500) + 1, samplefun, words, ' ') 

In this case, my functions for creating a bag-of-words matrix take about 30 seconds and my functions for creating a bag-of-ngrams matrix take about 500 seconds. Again, the existing n-gram vectorizers in R seem to choke on this dataset (though I'd love to be proven wrong!).

Edit 2: Timings vs tau:

zach_t1 <- system.time(zach_ng1 <- text_to_ngrams(sents1))
tau_t1 <- system.time(tau_ng1 <- tau::textcnt(as.list(sents1), n = 2L, method = "string", recursive = TRUE))
tau_t1 / zach_t1 # 1.598655

zach_t2 <- system.time(zach_ng2 <- text_to_ngrams(sents2))
tau_t2 <- system.time(tau_ng2 <- tau::textcnt(as.list(sents2), n = 2L, method = "string", recursive = TRUE))
tau_t2 / zach_t2 # 1.9295619

This is an interesting problem, and one that I have spent a lot of time grappling with in the quanteda package. It involves three aspects that I will comment on, although it's only the third that really addresses your question. But the first two points explain why I have focused only on the ngram creation function, since -- as you point out -- that is where the speed improvement can be made.

  1. Tokenization. Here you are using stringi::stri_split_fixed() on the space character, which is the fastest, but not the best, method for tokenizing. We implemented this almost exactly the same way in quanteda::tokenize(x, what = "fastest word"). It's not the best because stringi can do much smarter implementations of whitespace delimiters. (Even the character class \\s is smarter, but slightly slower -- this is implemented as what = "fasterword"; see the small stringi sketch after this list.) Your question was not about tokenization though, so this point is just context.

  2. Tabulating the document-feature matrix. Here we also use the Matrix package, index the documents and features (I call them features, not terms), and create a sparse matrix directly as you do in the code above. But your use of match() is a lot faster than the match/merge methods we were using through data.table. I am going to recode the quanteda::dfm() function, since your method is more elegant and faster. Really, really glad I saw this!

  3. Ngram creation. Here I think I can actually help in terms of performance. We implement this in quanteda through an argument to quanteda::tokenize() called ngrams = c(1), where the value can be any integer set. Our match for unigrams and bigrams would be ngrams = 1:2, for instance. You can examine the code at https://github.com/kbenoit/quanteda/blob/master/R/tokenize.R and look at the internal function ngram(). I've reproduced it below and made a wrapper so that we can directly compare it to your find_ngrams() function.
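(A quick illustration of the whitespace point from (1), assuming nothing beyond stringi itself: a fixed single-space split leaves tabs, newlines, and runs of spaces inside or between tokens, while a character-class split handles them, at a modest cost in speed.)

library(stringi)
x <- "one two\tthree  four\nfive"
stri_split_fixed(x, " ")
# [[1]]
# [1] "one"        "two\tthree" ""           "four\nfive"
stri_split_regex(x, "\\s+")
# [[1]]
# [1] "one"   "two"   "three" "four"  "five"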

Code:

# wrapper function
find_ngrams2 <- function(x, ngrams = 1, concatenator = " ") {
    if (sum(1:length(ngrams)) == sum(ngrams)) {
        result <- lapply(x, ngram, n = length(ngrams), concatenator = concatenator, include.all = TRUE)
    } else {
        result <- lapply(x, function(x) {
            xnew <- c()
            for (n in ngrams)
                xnew <- c(xnew, ngram(x, n, concatenator = concatenator, include.all = FALSE))
            xnew
        })
    }
    result
}

# does the work
ngram <- function(tokens, n = 2, concatenator = "_", include.all = FALSE) {

    if (length(tokens) < n)
        return(NULL)

    # start with lower ngrams, or just the specified size if include.all = FALSE
    start <- ifelse(include.all,
                    1,
                    ifelse(length(tokens) < n, 1, n))

    # set the max size of ngram at the max length of tokens
    end <- ifelse(length(tokens) < n, length(tokens), n)

    all_ngrams <- c()
    # outer loop over the ngram widths
    for (width in start:end) {
        new_ngrams <- tokens[1:(length(tokens) - width + 1)]
        # inner loop for ngrams of width > 1
        if (width > 1) {
            for (i in 1:(width - 1))
                new_ngrams <- paste(new_ngrams,
                                    tokens[(i + 1):(length(tokens) - width + 1 + i)],
                                    sep = concatenator)
        }
        # paste onto previous results and continue
        all_ngrams <- c(all_ngrams, new_ngrams)
    }

    all_ngrams
}

Here is a comparison on a simple text:

txt <- c("the quick brown fox named seamus jumps on lazy dog.",           "the dog brings newspaper boy named seamus.") tokens <- tokenize(tolower(txt), removepunct = true) tokens # [[1]] # [1] "the"    "quick"  "brown"  "fox"    "named"  "seamus" "jumps"  "over"   "the"    "lazy"   "dog"    #  # [[2]] # [1] "the"       "dog"       "brings"    "a"         "newspaper" "from"      "a"         "boy"       "named"     "seamus"    #  # attr(,"class") # [1] "tokenizedtexts" "list"       microbenchmark::microbenchmark(zach_ng <- find_ngrams(tokens, 2),                                ken_ng <- find_ngrams2(tokens, 1:2)) # unit: microseconds #                                expr     min       lq     mean   median       uq     max neval #   zach_ng <- find_ngrams(tokens, 2) 288.823 326.0925 433.5831 360.1815 542.9585 897.469   100 # ken_ng <- find_ngrams2(tokens, 1:2)  74.216  87.5150 130.0471 100.4610 146.3005 464.794   100  str(zach_ng) # list of 2 # $ : chr [1:21] "the" "quick" "brown" "fox" ... # $ : chr [1:19] "the" "dog" "brings" "a" ... str(ken_ng) # list of 2 # $ : chr [1:21] "the" "quick" "brown" "fox" ... # $ : chr [1:19] "the" "dog" "brings" "a" ... 

For your large, simulated text, here is the comparison:

tokens <- stri_split_fixed(sents1, ' ')
zach_ng1_t1 <- system.time(zach_ng1 <- find_ngrams(tokens, 2))
ken_ng1_t1 <- system.time(ken_ng1 <- find_ngrams2(tokens, 1:2))
zach_ng1_t1
#    user  system elapsed
# 230.176   5.243 246.389
ken_ng1_t1
#   user  system elapsed
#  58.264   1.405  62.889

That's already a real improvement, but I'd be delighted if it could be improved further. I should also be able to implement a faster dfm() method in quanteda, so that you can get what you want simply through:

dfm(sents1, ngrams = 1:2, what = "fastestword",
    toLower = FALSE, removePunct = FALSE, removeNumbers = FALSE, removeTwitter = TRUE)

(That already works, but is slower than your overall result, because the way you create the final sparse matrix object is faster -- I will change that soon.)

