Reduce Memory Use for Large Datasets
One key limiting factor for automated text classification is memory consumption. As you accumulate more news articles, bills, and legal opinions, the term-document matrices used to represent the data grow quickly. RTextTools provides two algorithms, support vector machines and maximum entropy, that can handle large datasets with very little memory. Luckily, these two algorithms tend to be the most accurate as well. However, some applications require an ensemble of more than two algorithms to get an accurate scoring of topic codes.
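If two algorithms are enough for your task, a low-memory workflow can stick to support vector machines and maximum entropy. Here is a minimal sketch, assuming a data frame called docs with a text column and a topic label column, and a 1,000-document corpus split 800/200 for training and testing; the names and split sizes are illustrative, not taken from this post.

library(RTextTools)

# Build a term-document matrix using the default cleaning options.
doc_matrix <- create_matrix(docs$text, language = "english")

# Wrap the matrix and labels in a container; the 800/200 split is illustrative.
container <- create_container(doc_matrix, docs$topic,
                              trainSize = 1:800, testSize = 801:1000,
                              virgin = FALSE)

# Restrict training to the two low-memory algorithms.
models  <- train_models(container, algorithms = c("SVM", "MAXENT"))
results <- classify_models(container, models)

# Summarize accuracy and agreement between the two classifiers.
analytics <- create_analytics(container, results)
summary(analytics)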
If you do need a larger ensemble, the first thing to try is reducing the number of terms in your matrix. The create_matrix() function provides many options that help remove noise from your dataset. Beyond the defaults (removing stopwords, removing punctuation, lowercasing words, and stripping whitespace), you can set a minimum word length (e.g. minWordLength=5), select the N most frequent terms from each document (e.g. selectFreqTerms=25), set a minimum word frequency per document (e.g. minDocFreq=3), and remove terms whose sparsity exceeds a threshold (e.g. removeSparseTerms=0.9998).
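As a rough illustration, a trimmed matrix might be built as follows. The threshold values are arbitrary, argument names can vary between RTextTools releases, and selectFreqTerms (mentioned above) is omitted here because it is not available in every version.

library(RTextTools)

# Build a smaller matrix by filtering out noisy terms; thresholds are illustrative.
small_matrix <- create_matrix(docs$text, language = "english",
                              minWordLength = 5,           # drop very short terms
                              minDocFreq = 3,              # term must occur at least 3 times in a document
                              removeSparseTerms = 0.9998)  # drop the sparsest terms

dim(small_matrix)  # compare with the untrimmed matrix to see the reduction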
These options can help you reduce the size of your document matrix, but they can also remove information that may be valuable to the learning algorithms. If nothing above helps and you simply need more resources to run a huge dataset, you should look into setting up an Amazon EC2 instance with RStudio installed. We plan to create a simple way of doing this in the near future, but you’ll have to brave the stormy waters for now. Be warned, this option is for experienced users only!