Before reading this post, I highly recommend reading:
- The original GloVe paper
- Jon Gauthier’s post, which provides a detailed explanation of the Python implementation. This post helped me a lot with the C++ implementation.
Word embedding
After Tomas Mikolov et al. released the word2vec tool, there was a boom of articles about word vector representations. One of the best is GloVe, which did a great service by explaining how such algorithms work and by reformulating word2vec optimization as a special kind of factorization of the word co-occurrence matrix.
This post consists of two main parts:
- A very brief introduction to the GloVe algorithm.
- Details of the implementation. I will show how to write fast, parallel asynchronous SGD with an adaptive learning rate in C++, using Intel TBB and RcppParallel.
Introduction to GloVe algorithm
The GloVe algorithm consists of the following steps:
- Collect word co-occurrence statistics in the form of a word co-occurrence matrix $X$. Each element $X_{ij}$ of this matrix represents how often word $i$ appears in the context of word $j$. Usually we scan the corpus in the following manner: for each term we look for context terms within some area defined by `window_size` before and `window_size` after the term. We also give less weight to more distant words (usually the weight is $\frac{1}{\text{offset}}$).
- Define a soft constraint for each word pair: $w_i^T w_j + b_i + b_j = \log(X_{ij})$. Here $w_i$ is the vector for the main word, $w_j$ is the vector for the context word, and $b_i$, $b_j$ are scalar biases for the main and context words.
- Define the cost function $J = \sum_{i=1}^{V} \sum_{j=1}^{V} f(X_{ij}) \left( w_i^T w_j + b_i + b_j - \log X_{ij} \right)^2$. Here $f$ is a weighting function which helps us avoid learning only from extremely common word pairs. The GloVe authors choose the following function: $f(X_{ij}) = \begin{cases} \left(\frac{X_{ij}}{x_{max}}\right)^{\alpha} & \text{if } X_{ij} < x_{max} \\ 1 & \text{otherwise} \end{cases}$
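As a minimal sketch, the weighting function above can be written in C++ like this (the defaults $x_{max} = 100$ and $\alpha = 0.75$ are the values suggested in the GloVe paper; the function name is illustrative):

```cpp
#include <cmath>

// GloVe weighting function: f(x) = (x / x_max)^alpha for x < x_max, else 1.
// The paper suggests x_max = 100 and alpha = 0.75.
inline double weighting_fun(double x, double x_max = 100.0, double alpha = 0.75) {
    return (x < x_max) ? std::pow(x / x_max, alpha) : 1.0;
}
```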
Implementation
The main challenges I faced during implementation were:
- Efficient co-occurrence matrix creation.
- An efficient SGD implementation for minimizing the cost function.
Co-occurrence matrix creation
There are a few main requirements for the term co-occurrence matrix (tcm):
- The tcm should be sparse. We should be able to construct a tcm for large vocabularies (> 100k words).
- Lookups/inserts should be fast.
To meet the sparsity requirement we need to store the data in an associative array. `unordered_map` is a good candidate because of its O(1) average lookup/insert complexity. I ended up with `std::unordered_map< std::pair<uint32_t, uint32_t>, T >` as the container for the sparse matrix in triplet form. The performance of `unordered_map` heavily depends on the underlying hash function. Fortunately, we can pack a `pair<uint32_t, uint32_t>` into a single `uint64_t` in a deterministic way without any collisions.
A hash function for `std::pair<uint32_t, uint32_t>` will look like:
```cpp
namespace std {
  template <>
  struct hash<std::pair<uint32_t, uint32_t>> {
    inline uint64_t operator()(const std::pair<uint32_t, uint32_t>& k) const {
      return (uint64_t) k.first << 32 | k.second;
    }
  };
}
```
For details see this and this Stack Overflow question.
Also note that our co-occurrence matrix is symmetric, so internally we will store only the elements above the main diagonal.
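Putting these pieces together, a minimal sketch of the tcm container might look as follows. The names are illustrative, not text2vec's actual code; here the pair-packing hash is passed as an explicit template argument, which is equivalent to specializing `std::hash` as shown above, and the pair is canonicalized so only the upper triangle is stored:

```cpp
#include <cstdint>
#include <unordered_map>
#include <utility>

// Hash that packs the index pair into one uint64_t (no collisions possible).
struct PairHash {
    inline uint64_t operator()(const std::pair<uint32_t, uint32_t>& k) const {
        return (uint64_t) k.first << 32 | k.second;
    }
};

// Sparse tcm in triplet form: (i, j) -> accumulated weighted count.
using TripletTCM = std::unordered_map<std::pair<uint32_t, uint32_t>, double, PairHash>;

// Accumulate a (distance-weighted) co-occurrence count; since the matrix is
// symmetric, swap indices so we only ever touch the upper triangle.
inline void add_cooccurrence(TripletTCM& tcm, uint32_t i, uint32_t j, double weight) {
    if (i > j) std::swap(i, j);
    tcm[std::make_pair(i, j)] += weight;
}
```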
Stochastic gradient descent
Now we need to implement efficient parallel asynchronous stochastic gradient descent for word co-occurrence matrix factorization, as proposed in the GloVe paper. An interesting point: SGD is inherently a serial algorithm, but when your problem is sparse, you can perform asynchronous updates without any locks and achieve a speedup proportional to the number of cores on your machine! If you haven't read HOGWILD!, I recommend doing so.
Let me recall the formulation of SGD. We try to move the parameters $\theta$ in a minimizing direction, given by the gradient, with a learning rate $\eta$:

$\theta \leftarrow \theta - \eta \nabla_\theta J$

So, we have to calculate the gradients of our cost function. For a single co-occurrence $(i, j)$, with $d_{ij} = w_i^T w_j + b_i + b_j - \log X_{ij}$:

$\frac{\partial J_{ij}}{\partial w_i} = 2 f(X_{ij}) \, d_{ij} \, w_j, \qquad \frac{\partial J_{ij}}{\partial w_j} = 2 f(X_{ij}) \, d_{ij} \, w_i, \qquad \frac{\partial J_{ij}}{\partial b_i} = \frac{\partial J_{ij}}{\partial b_j} = 2 f(X_{ij}) \, d_{ij}$
AdaGrad
We will use a modification of SGD, the AdaGrad algorithm. It automatically determines a per-feature learning rate by tracking historical gradients, so that frequently occurring features get small learning rates and infrequent features get higher ones. For AdaGrad implementation details, see the excellent Notes on AdaGrad by Chris Dyer.
The formulation of the AdaGrad update at step $t$ for feature $i$ is the following:

$\theta_{t+1,\,i} = \theta_{t,\,i} - \frac{\eta}{\sqrt{\sum_{\tau=1}^{t} g_{\tau,\,i}^2}} \, g_{t,\,i}$

As we can see, at each iteration we need to keep track of the sum of squares of all historical gradients.
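The per-feature update above can be sketched in a few lines of C++ (an illustrative helper, not text2vec's code; note that real implementations often initialize the squared-gradient accumulators to 1 to avoid division by zero on the first step):

```cpp
#include <cmath>
#include <vector>

// One AdaGrad step for a parameter vector: accumulate the squared gradient
// per feature and divide the global learning rate by its square root.
inline void adagrad_update(std::vector<double>& w,
                           std::vector<double>& grad_sq,
                           const std::vector<double>& grad,
                           double learning_rate) {
    for (size_t i = 0; i < w.size(); ++i) {
        grad_sq[i] += grad[i] * grad[i];
        w[i] -= learning_rate * grad[i] / std::sqrt(grad_sq[i]);
    }
}
```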
Parallel asynchronous AdaGrad
Actually, we will use a modification of AdaGrad – HOGWILD-style asynchronous AdaGrad 🙂 The main idea of the HOGWILD! algorithm is very simple – don't use any synchronization. If your problem is sparse, allow threads to overwrite each other! This works, and works well. Again, see the HOGWILD! paper for details and a theoretical proof.
Code
Now let's put it all into code.
As seen from the analysis above, the `GloveFit` class should contain the following parameters:
- word vectors `w_i`, `w_j` (for the main and context words).
- biases `b_i`, `b_j`.
- squared gradients for the word vectors, `grad_sq_w_i`, `grad_sq_w_j`, for the adaptive learning rates.
- squared gradients for the word biases, `grad_sq_b_i`, `grad_sq_b_j`, for the adaptive learning rates.
- `learning_rate`, `max_cost` and other scalar model parameters.
```cpp
class GloveFit {
private:
  size_t vocab_size, word_vec_size;
  double x_max, learning_rate;
  // see https://github.com/maciejkula/glove-python/pull/9#issuecomment-68058795
  // clips the cost for numerical stability
  double max_cost;
  // initial learning rate
  double alpha;
  // word vectors
  vector<vector<double> > w_i, w_j;
  // word biases
  vector<double> b_i, b_j;
  // word vectors square gradients
  vector<vector<double> > grad_sq_w_i, grad_sq_w_j;
  // word biases square gradients
  vector<double> grad_sq_b_i, grad_sq_b_j;
};
```
Single iteration
Now we need to initialize the parameters and perform one iteration of SGD:
```
// init cost
global_cost = 0
// assume tcm is a sparse matrix in triplet form - <i, j, x>
for_each (<i, j, x>) {
  // compute cost function and add it to global cost
  global_cost += J(x)
  // compute gradients for bias terms and perform adaptive updates for bias terms
  // compute gradients for word vector terms and perform adaptive updates for word vectors
  // update squared gradient sums for word vectors
  // update squared gradient sums for biases
}
return global_cost;
```
For the actual text2vec code (with a few tricks) check this loop.
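For illustration, here is a single-threaded sketch of what one such SGD step over one `<i, j, x>` triplet might look like. The function name and signature are illustrative, not text2vec's actual code; the constant factor 2 from the gradients is absorbed into the learning rate, as is conventional, and non-zero first gradients are assumed (real code often initializes the squared-gradient accumulators to 1):

```cpp
#include <cmath>
#include <vector>

// One GloVe SGD step with AdaGrad for a single <i, j, x> triplet.
// Returns this pair's contribution to the global cost, f(x) * diff^2.
double glove_update(std::vector<double>& w_i, std::vector<double>& w_j,
                    double& b_i, double& b_j,
                    std::vector<double>& gsq_w_i, std::vector<double>& gsq_w_j,
                    double& gsq_b_i, double& gsq_b_j,
                    double x, double x_max, double alpha, double lr) {
    // weighting function f(x)
    double f = (x < x_max) ? std::pow(x / x_max, alpha) : 1.0;
    // diff = w_i^T w_j + b_i + b_j - log(x)
    double diff = b_i + b_j - std::log(x);
    for (size_t k = 0; k < w_i.size(); ++k) diff += w_i[k] * w_j[k];
    double fdiff = f * diff;
    // word vector gradients (factor 2 absorbed into lr), AdaGrad updates
    for (size_t k = 0; k < w_i.size(); ++k) {
        double grad_i = fdiff * w_j[k];
        double grad_j = fdiff * w_i[k];
        gsq_w_i[k] += grad_i * grad_i;
        gsq_w_j[k] += grad_j * grad_j;
        w_i[k] -= lr * grad_i / std::sqrt(gsq_w_i[k]);
        w_j[k] -= lr * grad_j / std::sqrt(gsq_w_j[k]);
    }
    // bias gradients are just f * diff
    gsq_b_i += fdiff * fdiff;
    gsq_b_j += fdiff * fdiff;
    b_i -= lr * fdiff / std::sqrt(gsq_b_i);
    b_j -= lr * fdiff / std::sqrt(gsq_b_j);
    return fdiff * diff;  // = f * diff^2
}
```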
OpenMP
As discussed above, all these steps can be performed in a parallel loop (over all non-zero word co-occurrence scores). This can easily be done via OpenMP `parallel for` and reduction: `#pragma omp parallel for reduction(+:global_cost)`. But there is one significant issue with this approach – it is very hard to make a portable R package with OpenMP support. By default it will work only on Linux distributions, because:
- the default `clang` on OS X doesn't support OpenMP (of course you can install `clang-omp` or `gcc` from brew, but this can also be tricky).
- Rtools began to support OpenMP on Windows only in 2015, and even the modern implementation has substantial overhead.
For more details, see the OpenMP support section of the Writing R Extensions manual.
Intel TBB
Luckily we have a better alternative – the Intel Threading Building Blocks library and the RcppParallel package, which provides the `RVector` and `RMatrix` wrapper classes for safe and convenient access to R data structures in a multi-threaded environment! Moreover, it "just works" on the main platforms – OS X, Windows, and Linux. I have had a very positive experience with this library; thanks to the Rcpp core team and especially to JJ Allaire.
Using TBB is a little trickier than writing simple OpenMP `#pragma` directives. You have to implement a functor which operates on a chunk of data, and call `parallelReduce` or `parallelFor` on the entire data collection. You can find useful (and simple) examples in the RcppParallel examples section.
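To illustrate the shape such a functor takes, here is a minimal, library-free sketch of the `parallelReduce` pattern: `operator()` folds a half-open chunk `[begin, end)` into a local accumulator, a "splitting constructor" creates a fresh accumulator for another thread, and `join` merges partial results. (With TBB/RcppParallel the library itself splits the range and calls these from worker threads; the class and member names here are illustrative.)

```cpp
#include <cstddef>
#include <vector>

// Minimal reduction functor in the parallelReduce style.
struct SumWorker {
    const std::vector<double>& data;
    double value;
    explicit SumWorker(const std::vector<double>& d) : data(d), value(0.0) {}
    // "splitting constructor": fresh accumulator over the same data
    SumWorker(const SumWorker& other, int /*split tag*/) : data(other.data), value(0.0) {}
    // process just the elements of the range [begin, end)
    void operator()(std::size_t begin, std::size_t end) {
        for (std::size_t i = begin; i < end; ++i) value += data[i];
    }
    // merge a partial result from another worker
    void join(const SumWorker& rhs) { value += rhs.value; }
};
```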
Putting all together
For now, suppose we have a `partial_fit` method in the `GloveFit` class with the following signature (see the actual code here):
```cpp
double partial_fit( size_t begin,
                    size_t end,
                    const RVector<int> &x_irow,
                    const RVector<int> &x_icol,
                    const RVector<double> &x_val);
```
It takes:
- the tcm in sparse triplet form: `<x_irow, x_icol, x_val>`.
- `begin` and `end` pointers for the range on which we want to perform our SGD.
It performs SGD steps over this range – updating word vectors, gradients, etc. – and at the end returns the value of the accumulated cost function. Note that internally this method modifies members of the class.
Also note that the signature of `partial_fit` is very similar to what we have to implement in our TBB functor. Now we are ready to write it:
```cpp
struct AdaGradIter : public Worker {
  RVector<int> x_irow, x_icol;
  RVector<double> x_val;
  RVector<int> iter_order;
  GloveFit &fit;

  int vocab_size, word_vec_size, num_iters, x_max;
  double learning_rate;
  // accumulated value
  double global_cost;

  // function to set global_cost = 0 between iterations
  void set_cost_zero() { global_cost = 0; };

  // init function to use between iterations
  void init(const IntegerVector &x_irowR,
            const IntegerVector &x_icolR,
            const NumericVector &x_valR,
            const IntegerVector &iter_orderR,
            GloveFit &fit) {
    x_irow = RVector<int>(x_irowR);
    x_icol = RVector<int>(x_icolR);
    x_val  = RVector<double>(x_valR);
    iter_order = RVector<int>(iter_orderR);
    global_cost = 0;
  }

  // dummy constructor
  // used at first initialization of GloveFitter
  AdaGradIter(GloveFit &fit):
    x_irow(IntegerVector(0)),
    x_icol(IntegerVector(0)),
    x_val(NumericVector(0)),
    iter_order(IntegerVector(0)),
    fit(fit) {};

  // constructor
  AdaGradIter(const IntegerVector &x_irowR,
              const IntegerVector &x_icolR,
              const NumericVector &x_valR,
              GloveFit &fit):
    x_irow(x_irowR),
    x_icol(x_icolR),
    x_val(x_valR),
    iter_order(IntegerVector(0)),
    fit(fit),
    global_cost(0) {}

  // constructor called at split
  AdaGradIter(const AdaGradIter& AdaGradIter, Split):
    x_irow(AdaGradIter.x_irow),
    x_icol(AdaGradIter.x_icol),
    x_val(AdaGradIter.x_val),
    iter_order(AdaGradIter.iter_order),
    fit(AdaGradIter.fit),
    global_cost(0) {}

  // process just the elements of the range
  void operator()(size_t begin, size_t end) {
    global_cost += this->fit.partial_fit(begin, end, x_irow, x_icol, x_val);
  }

  // join my value with that of another AdaGradIter
  void join(const AdaGradIter& rhs) {
    global_cost += rhs.global_cost;
  }
};
```
As you can see, it is very similar to the example from the RcppParallel site. One difference: it has side effects. By calling `partial_fit`, it modifies the internal state of the input instance of the `GloveFit` class (which actually contains our GloVe model).
Now let's write the `GloveFitter` class, which will be callable from R via `Rcpp-modules`. It will act as an interface for fitting our model and will take all input model parameters, such as vocabulary size, desired word vector size, initial AdaGrad learning rate, etc. We also want to track the cost between iterations and to be able to perform some early stopping strategy between SGD iterations. For that purpose we keep our model in a C++ class, so we can modify it "in place" at each SGD iteration (which can be problematic in R).
```cpp
class GloveFitter {
public:
  GloveFitter(size_t vocab_size,
              size_t word_vec_size,
              uint32_t x_max,
              double learning_rate,
              uint32_t grain_size,
              double max_cost,
              double alpha):
    grain_size(grain_size),
    gloveFit(vocab_size, word_vec_size, learning_rate, x_max, max_cost, alpha),
    adaGradIter(gloveFit) {}

  // function to set cost to zero from R (used between SGD iterations)
  void set_cost_zero() { adaGradIter.set_cost_zero(); };

  double fit_chunk(const IntegerVector x_irow,
                   const IntegerVector x_icol,
                   const NumericVector x_val,
                   const IntegerVector iter_order) {
    this->adaGradIter.init(x_irow, x_icol, x_val, iter_order, gloveFit);
    parallelReduce(0, x_irow.size(), adaGradIter, grain_size);
    return (this->adaGradIter.global_cost);
  }

  // export word vectors to R
  List get_word_vectors() {
    return List::create(_["word_vectors"] = adaGradIter.fit.get_word_vectors());
  }

private:
  uint32_t grain_size;
  GloveFit gloveFit;
  AdaGradIter adaGradIter;
};
```
And create a wrapper with `Rcpp-Modules`:
```cpp
RCPP_MODULE(GloveFitter) {
  class_< GloveFitter >( "GloveFitter" )
  // <vocab_size, word_vec_size, x_max, learning_rate, grain_size, max_cost, alpha>
  .constructor<size_t, size_t, uint32_t, double, uint32_t, double, double>()
  .method( "get_word_vectors", &GloveFitter::get_word_vectors, "returns word vectors")
  .method( "set_cost_zero", &GloveFitter::set_cost_zero, "sets cost to zero")
  .method( "fit_chunk", &GloveFitter::fit_chunk, "process TCM data chunk")
  ;
}
```
Now we can use the `GloveFitter` class from R:
```r
fit <- new(GloveFitter, vocabulary_size, word_vectors_size, x_max,
           learning_rate, grain_size, max_cost, alpha)
NUM_ITER <- 10
for(i in seq_len(NUM_ITER)) {
  cost <- fit$fit_chunk(tcm@i, tcm@j, tcm@x, iter_order)
  print(cost)
  fit$set_cost_zero()
}
```