Classification from scratch, SVM 7/8
Seventh post of our series on classification from scratch. The latest one was on neural nets, and today, we will discuss SVM, support vector machines.
A formal introduction
Here $y$ takes values in $\{-1,+1\}$. Our model will be $$m(\mathbf{x})=\text{sign}[\boldsymbol{\omega}^T\mathbf{x}+b]$$ Thus, the space is divided by a (linear) border $$\Delta:\{\mathbf{x}\in\mathbb{R}^p:\boldsymbol{\omega}^T\mathbf{x}+b=0\}$$
The distance from point $\mathbf{x}_i$ to $\Delta$ is $$d(\mathbf{x}_i,\Delta)=\frac{\boldsymbol{\omega}^T\mathbf{x}_i+b}{\|\boldsymbol{\omega}\|}$$ If the space is linearly separable, the problem is ill posed (there is an infinite number of solutions). So consider
$$\max_{\boldsymbol{\omega},b}\left\{\min_{i=1,\cdots,n}\left\{\text{distance}(\mathbf{x}_i,\Delta)\right\}\right\}$$
The strategy is to maximize the margin. One can prove that we want to solve $$\max_{\boldsymbol{\omega},m}\left\{\frac{m}{\|\boldsymbol{\omega}\|}\right\}$$
subject to $y_i\cdot(\boldsymbol{\omega}^T\mathbf{x}_i)=m$, $\forall i=1,\cdots,n$. Again, the problem is ill posed (non identifiable), since rescaling $(\boldsymbol{\omega},m)$ by a common factor leaves it unchanged, and we can consider $m=1$: $$\max_{\boldsymbol{\omega}}\left\{\frac{1}{\|\boldsymbol{\omega}\|}\right\}$$
subject to $y_i\cdot(\boldsymbol{\omega}^T\mathbf{x}_i)=1$, $\forall i=1,\cdots,n$. The optimization objective can then be written $$\min_{\boldsymbol{\omega}}\left\{\|\boldsymbol{\omega}\|^2\right\}$$
The primal problem
In the separable case, consider the following primal problem, $$\min_{\boldsymbol{\omega}\in\mathbb{R}^d,b\in\mathbb{R}}\left\{\frac{1}{2}\|\boldsymbol{\omega}\|^2\right\}$$ subject to $y_i\cdot(\boldsymbol{\omega}^T\mathbf{x}_i+b)\geq 1$, $\forall i=1,\cdots,n$.
In the non-separable case, introduce slack (error) variables $\boldsymbol{\xi}$: if $y_i\cdot(\boldsymbol{\omega}^T\mathbf{x}_i+b)\geq 1$, there is no error and $\xi_i=0$; otherwise $\xi_i=1-y_i\cdot(\boldsymbol{\omega}^T\mathbf{x}_i+b)>0$.
Let $C$ denote the cost of misclassification. The optimization problem becomes $$\min_{\boldsymbol{\omega}\in\mathbb{R}^d,b\in\mathbb{R},\boldsymbol{\xi}\in\mathbb{R}^n}\left\{\frac{1}{2}\|\boldsymbol{\omega}\|^2+C\sum_{i=1}^n\xi_i\right\}$$ subject to $y_i\cdot(\boldsymbol{\omega}^T\mathbf{x}_i+b)\geq 1-\xi_i$, with $\xi_i\geq 0$, $\forall i=1,\cdots,n$.
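As a side remark (not needed for what follows): since at the optimum $\xi_i=\max(0,1-y_i(\boldsymbol{\omega}^T\mathbf{x}_i+b))$, the same problem can also be read as a penalized hinge-loss minimization, $$\min_{\boldsymbol{\omega},b}\left\{\frac{1}{2}\|\boldsymbol{\omega}\|^2+C\sum_{i=1}^n\max\left(0,1-y_i(\boldsymbol{\omega}^T\mathbf{x}_i+b)\right)\right\}$$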
Let us try to code this optimization problem, on the myocarde dataset used in the previous posts of this series,
n = length(myocarde[,"PRONO"])
myocarde0 = myocarde
myocarde0$PRONO = myocarde$PRONO*2-1
C = .5
and we have to set a value for the cost C. In the (linearly) constrained optimization function in R, we need to provide the objective function f(θ) and the gradient ∇f(θ).
f = function(param){
  w  = param[1:7]
  b  = param[8]
  xi = param[8+1:nrow(myocarde)]
  .5*sum(w^2) + C*sum(xi)}
grad_f = function(param){
  w  = param[1:7]
  b  = param[8]
  xi = param[8+1:nrow(myocarde)]
  c(2*w,0,rep(C,length(xi)))}
and (linear) constraints are written as $\mathbf{U}\boldsymbol{\theta}-\mathbf{c}\geq\mathbf{0}$
Ui = rbind(cbind(myocarde0[,"PRONO"]*as.matrix(myocarde[,1:7]),diag(n),myocarde0[,"PRONO"]),
           cbind(matrix(0,n,7),diag(n,n),matrix(0,n,1)))
Ci = c(rep(1,n),rep(0,n))
Then we use
constrOptim(theta=p_init, f, grad_f, ui = Ui, ci = Ci)
Observe that something is missing here: we need a starting point for the algorithm, θ0. Unfortunately, I could not think of a simple technique to get a valid starting point (that satisfies those linear constraints).
Let us try something else, since those functions are quite simple: either linear or quadratic. Actually, one can recognize, in the separable case but also in the non-separable case, a classic quadratic program $$\min_{\mathbf{z}\in\mathbb{R}^d}\left\{\frac{1}{2}\mathbf{z}^T\mathbf{D}\mathbf{z}-\mathbf{d}^T\mathbf{z}\right\}$$ subject to $\mathbf{A}\mathbf{z}\geq\mathbf{b}$.
library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
D = diag(n+7+1)
diag(D)[8+0:n] = 0
d = matrix(c(rep(0,7),0,rep(C,n)), nrow=n+7+1)
A = Ui
b = Ci
sol = solve.QP(D+eps*diag(n+7+1), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution
(omega = qpsol[1:7])
[1] -0.106642005446 -0.002026198103 -0.022513312261 -0.018958578746 -0.023105767847 -0.018958578746 -1.080638988521
(b = qpsol[n+7+1])
[1] 997.6289927
Given an observation x, the prediction is
$$y=\text{sign}[\boldsymbol{\omega}^T\mathbf{x}+b]$$
y_pred = 2*((as.matrix(myocarde0[,1:7])%*%omega+b)>0)-1
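As a small side check (not in the original code), one can tabulate these predictions against the observed labels, as we will do again further below,

# in-sample confusion table for the primal-QP classifier
table(obs = myocarde0$PRONO, pred = y_pred)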
Observe that here, we do have a classifier, depending on whether the point lies on one side or the other (above or below, etc.) of the separating line (or hyperplane). We do not have a probability, because there is no probabilistic model here. So far.
The dual problem
The Lagrangian of the separable problem could be written, introducing Lagrange multipliers $\boldsymbol{\alpha}\in\mathbb{R}^n$, $\boldsymbol{\alpha}\geq\mathbf{0}$, as $$L(\boldsymbol{\omega},b,\boldsymbol{\alpha})=\frac{1}{2}\|\boldsymbol{\omega}\|^2-\sum_{i=1}^n\alpha_i\left(y_i(\boldsymbol{\omega}^T\mathbf{x}_i+b)-1\right)$$ Somehow, $\alpha_i$ represents the influence of the observation $(y_i,\mathbf{x}_i)$.
Consider the Dual Problem, with $\mathbf{G}=[G_{ij}]$ and $G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i$,
$$\min_{\boldsymbol{\alpha}\in\mathbb{R}^n}\left\{\frac{1}{2}\boldsymbol{\alpha}^T\mathbf{G}\boldsymbol{\alpha}-\mathbf{1}^T\boldsymbol{\alpha}\right\}$$
subject to $\mathbf{y}^T\boldsymbol{\alpha}=0$ and $\boldsymbol{\alpha}\geq\mathbf{0}$.
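(For completeness, a step that is only implicit here: the dual is obtained from the first-order conditions of the Lagrangian, $$\frac{\partial L}{\partial\boldsymbol{\omega}}=\boldsymbol{\omega}-\sum_{i=1}^n\alpha_iy_i\mathbf{x}_i=\mathbf{0}\quad\text{and}\quad\frac{\partial L}{\partial b}=-\sum_{i=1}^n\alpha_iy_i=0,$$ and plugging $\boldsymbol{\omega}=\sum_i\alpha_iy_i\mathbf{x}_i$ back into $L$ yields the quadratic form in $\boldsymbol{\alpha}$ above.)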
The Lagrangian of the non-separable problem could be written, introducing Lagrange multipliers $\boldsymbol{\alpha},\boldsymbol{\beta}\in\mathbb{R}^n$, $\boldsymbol{\alpha},\boldsymbol{\beta}\geq\mathbf{0}$, and defining the Lagrangian $L(\boldsymbol{\omega},b,\boldsymbol{\xi},\boldsymbol{\alpha},\boldsymbol{\beta})$ as $$\frac{1}{2}\|\boldsymbol{\omega}\|^2+C\sum_{i=1}^n\xi_i-\sum_{i=1}^n\alpha_i\left(y_i(\boldsymbol{\omega}^T\mathbf{x}_i+b)-1+\xi_i\right)-\sum_{i=1}^n\beta_i\xi_i$$
Somehow, $\alpha_i$ represents the influence of the observation $(y_i,\mathbf{x}_i)$.
The Dual Problem becomes, with $\mathbf{G}=[G_{ij}]$ and $G_{ij}=y_iy_j\mathbf{x}_j^T\mathbf{x}_i$,
$$\min_{\boldsymbol{\alpha}\in\mathbb{R}^n}\left\{\frac{1}{2}\boldsymbol{\alpha}^T\mathbf{G}\boldsymbol{\alpha}-\mathbf{1}^T\boldsymbol{\alpha}\right\}$$
subject to $\mathbf{y}^T\boldsymbol{\alpha}=0$, $\boldsymbol{\alpha}\geq\mathbf{0}$ and $\boldsymbol{\alpha}\leq C$.
As previously, one can also use quadratic programming
library(quadprog)
eps = 5e-4
y = myocarde[,"PRONO"]*2-1
X = as.matrix(cbind(1,myocarde[,1:7]))
n = length(y)
Q = sapply(1:n, function(i) y[i]*t(X)[,i])
D = t(Q)%*%Q
d = matrix(1, nrow=n)
A = rbind(y,diag(n),-diag(n))
C = .5
b = c(0,rep(0,n),rep(-C,n))
sol = solve.QP(D+eps*diag(n), d, t(A), b, meq=1, factorized=FALSE)
qpsol = sol$solution
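As a small aside (not in the original code), the observations with a (numerically) nonzero $\alpha_i$ are the support vectors; the 1e-5 threshold below is an arbitrary tolerance,

# number of support vectors, i.e. observations with nonzero alpha
alpha = qpsol
sum(alpha > 1e-5)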
The two problems are connected in the sense that, for all $\mathbf{x}$, $$\boldsymbol{\omega}^T\mathbf{x}+b=\sum_{i=1}^n\alpha_iy_i(\mathbf{x}^T\mathbf{x}_i)+b$$
To recover the solution of the primal problem, $$\boldsymbol{\omega}=\sum_{i=1}^n\alpha_iy_i\mathbf{x}_i$$ thus
omega = apply(qpsol*y*X,2,sum)
omega
                           1                        FRCAR                        INCAR                        INSYS 
 0.0000000000000002439074265  0.0550138658687635215271960 -0.0920163239049630876653652  0.3609571899422952534486342 
                       PRDIA                        PAPUL                        PVENT                        REPUL 
-0.1094017965288692356695677 -0.0485213403643276475207813 -0.0660058643191372279579454  0.0010093656567606212794835 
while $b=y-\boldsymbol{\omega}^T\mathbf{x}$ (but actually, one can add the constant vector in the matrix of explanatory variables).
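A small sketch of that recovery (using the objects above, with qpsol playing the role of $\boldsymbol{\alpha}$, and an arbitrary 1e-5 tolerance): for any support vector strictly inside the box, $0<\alpha_i<C$, we have $y_i(\boldsymbol{\omega}^T\mathbf{x}_i+b)=1$, so $b$ can be recovered and averaged over those points; since the constant column is already included in X here, the value obtained should be close to the first component of omega,

# recover b from the margin support vectors (0 < alpha_i < C)
alpha = qpsol
sv = which(alpha > 1e-5 & alpha < C - 1e-5)
b0 = mean(y[sv] - as.matrix(myocarde[sv,1:7]) %*% omega[-1])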
More generally, consider the following function (to make sure that $\mathbf{D}$ is a positive-definite matrix, we use the nearPD function from the Matrix package).
library(Matrix)
svm.fit = function(X, y, C=NULL) {
  n.samples = nrow(X)
  n.features = ncol(X)
  # Gram matrix of the (linear) kernel
  K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
  for (i in 1:n.samples){
    for (j in 1:n.samples){
      K[i,j] = X[i,] %*% X[j,] }}
  # dual problem, solved by quadratic programming
  Dmat = outer(y,y) * K
  Dmat = as.matrix(nearPD(Dmat)$mat)
  dvec = rep(1, n.samples)
  Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
  bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
  res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
  a = res$solution
  bomega = apply(a*y*X,2,sum)
  return(bomega)
}
On our dataset, we obtain
M = as.matrix(myocarde[,1:7])
center = function(z) (z-mean(z))/sd(z)
for(j in 1:7) M[,j] = center(M[,j])
bomega = svm.fit(cbind(1,M),myocarde$PRONO*2-1,C=.5)
y_pred = 2*((cbind(1,M)%*%bomega)>0)-1
table(obs=myocarde0$PRONO,pred=y_pred)
    pred
obs  -1  1
  -1 27  2
  1   9 33
i.e. 11 misclassifications out of 71 points (which is also what we got with the logistic regression).
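Equivalently, as a small side check, the in-sample misclassification rate can be computed directly,

# proportion of misclassified observations
mean(y_pred != myocarde0$PRONO)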
Kernel Based Approach
In some cases, it might be difficult to “separate” the two sets of points with a linear separator, like below,
It might be difficult, here, because we want to find a straight line in the two-dimensional space $(x_1,x_2)$. But maybe we can distort the space, possibly by adding another dimension
That’s, heuristically, the idea: in the case above, in dimension 3, the set of points is now linearly separable. And the trick to do so is to use a kernel. The difficult task is to find a good one (if any).
A positive kernel on $\mathcal{X}$ is a symmetric function $k:\mathcal{X}\times\mathcal{X}\to\mathbb{R}$ such that for any $n$, $\forall\alpha_1,\cdots,\alpha_n$ and $\forall\mathbf{x}_1,\cdots,\mathbf{x}_n$, $$\sum_{i=1}^n\sum_{j=1}^n\alpha_i\alpha_jk(\mathbf{x}_i,\mathbf{x}_j)\geq 0.$$
For example, the linear kernel is $k(\mathbf{x}_i,\mathbf{x}_j)=\mathbf{x}_i^T\mathbf{x}_j$. That’s what we’ve been using here, so far. One can also define the product kernel $k(\mathbf{x}_i,\mathbf{x}_j)=\kappa(\mathbf{x}_i)\cdot\kappa(\mathbf{x}_j)$ where $\kappa$ is some function $\mathcal{X}\to\mathbb{R}$.
Finally, the Gaussian kernel is $k(\mathbf{x}_i,\mathbf{x}_j)=\exp[-\|\mathbf{x}_i-\mathbf{x}_j\|^2]$. Since it is a function of $\|\mathbf{x}_i-\mathbf{x}_j\|$, it is also called a radial kernel.
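Such kernels are easy to code; for instance, a radial kernel in the same style as the linear.kernel function below could be written as follows (a sketch; the bandwidth parameter sigma is an addition, not used in the text above),

# Gaussian (radial) kernel, with a bandwidth parameter sigma
radial.kernel = function(x1, x2, sigma=1) {
  return(exp(-sigma * sum((x1-x2)^2))) }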
linear.kernel = function(x1, x2) {
  return (x1%*%x2) }

svm.fit = function(X, y, FUN=linear.kernel, C=NULL) {
  n.samples = nrow(X)
  n.features = ncol(X)
  K = matrix(rep(0, n.samples*n.samples), nrow=n.samples)
  for (i in 1:n.samples){
    for (j in 1:n.samples){
      K[i,j] = FUN(X[i,], X[j,]) } }
  Dmat = outer(y,y) * K
  Dmat = as.matrix(nearPD(Dmat)$mat)
  dvec = rep(1, n.samples)
  Amat = rbind(y, diag(n.samples), -1*diag(n.samples))
  bvec = c(0, rep(0, n.samples), rep(-C, n.samples))
  res = solve.QP(Dmat,dvec,t(Amat),bvec=bvec, meq=1)
  a = res$solution
  bomega = apply(a*y*X,2,sum)
  return(bomega)
}
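Note that the last line, bomega = apply(a*y*X,2,sum), really only makes sense with the linear kernel; with a general kernel one would rather keep the dual weights, the training points and the kernel itself, and predict with something like the sketch below (svm.predict and its arguments are hypothetical names, not part of the code above),

# kernel-based prediction: sign( sum_i a_i y_i k(x_i, x) + b )
svm.predict = function(X_train, y_train, a, FUN, newX, b=0) {
  apply(newX, 1, function(x)
    sign(sum(a * y_train * apply(X_train, 1, function(xi) FUN(xi, x))) + b)) }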
Link to the regression
To relate this dual optimization problem to OLS, recall that $y=\mathbf{x}^T\boldsymbol{\omega}+\varepsilon$, so that $\widehat{y}=\mathbf{x}^T\widehat{\boldsymbol{\omega}}$, where $\widehat{\boldsymbol{\omega}}=[\mathbf{X}^T\mathbf{X}]^{-1}\mathbf{X}^T\mathbf{y}$.
But one can also write $$\widehat{y}=\mathbf{x}^T\widehat{\boldsymbol{\omega}}=\sum_{i=1}^n\widehat{\alpha}_i\cdot\mathbf{x}^T\mathbf{x}_i$$
where $\widehat{\boldsymbol{\alpha}}=\mathbf{X}[\mathbf{X}^T\mathbf{X}]^{-1}\widehat{\boldsymbol{\omega}}$, or conversely, $\widehat{\boldsymbol{\omega}}=\mathbf{X}^T\widehat{\boldsymbol{\alpha}}$.
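As a quick sanity check of this identity (a sketch, on simulated data, with ad hoc variable names),

# check that X %*% omega_hat coincides with X %*% t(X) %*% alpha_hat
set.seed(1)
Xs = matrix(rnorm(50*3), 50, 3)
ys = rnorm(50)
omega_hat = solve(t(Xs)%*%Xs, t(Xs)%*%ys)
alpha_hat = Xs %*% solve(t(Xs)%*%Xs) %*% omega_hat
max(abs(Xs %*% omega_hat - Xs %*% t(Xs) %*% alpha_hat))  # should be numerically zero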
Application (on our small dataset)
One can actually use a dedicated R package to run an SVM. To get the linear kernel, use
library(kernlab)
df0 = df
df0$y = 2*(df$y=="1")-1
SVM1 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , type="C-svc")
Since the dataset is not linearly separable, there will be some mistakes here
table(df0$y,predict(SVM1))
     -1 1
  -1  2 2
  1   1 5
The problem with that function is that it cannot be used to get a prediction for points other than those in the sample (and I could neither extract $\boldsymbol{\omega}$ nor $b$ from the 24 slots of that object). But it’s possible by adding a small option in the function
SVM2 = ksvm(y ~ x1 + x2, data = df0, C=.5, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")
With that function, we convert the distance into some sort of probability. Someday, I will try to replicate the probabilistic version of SVM, I promise, but today, the goal is just to understand what is done when running the SVM algorithm. To visualize the prediction, use
pred_SVM2 = function(x,y){
return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")
Here the cost is C=.5, but of course, we can change it
SVM2 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "vanilladot" , prob.model=TRUE, type="C-svc")
pred_SVM2 = function(x,y){
return(predict(SVM2,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM2(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")
As expected, we have a linear separator, but a slightly different one. Now, let us consider the “Radial Basis Gaussian kernel”
SVM3 = ksvm(y ~ x1 + x2, data = df0, C=2, kernel = "rbfdot" , prob.model=TRUE, type="C-svc")
Observe that here, we’ve been able to separate the white and the black points
table(df0$y,predict(SVM3))
     -1 1
  -1  4 0
  1   0 6

pred_SVM3 = function(x,y){
return(predict(SVM3,newdata=data.frame(x1=x,x2=y), type="probabilities")[,2])}
plot(df$x1,df$x2,pch=c(1,19)[1+(df$y=="1")],
     cex=1.5,xlab="", ylab="",xlim=c(0,1),ylim=c(0,1))
vu = seq(-.1,1.1,length=251)
vv = outer(vu,vu,function(x,y) pred_SVM3(x,y))
contour(vu,vu,vv,add=TRUE,lwd=2,levels = .5,col="red")
Now, to be completely honest, while I understand the theory of the algorithm used to compute $\boldsymbol{\omega}$ and $b$ with a linear kernel (using quadratic programming), I do not feel comfortable with this R function. Especially if you run it several times… you can get different results (with exactly the same set of parameters).
(to be continued…)