[This article was first published on Memo's Island, and kindly contributed to R-bloggers].
Preamble There is a lot of confusion among practitioners regarding the concept of overfitting. A kind of urban legend, or meme, circulates in data science and allied fields with the following statement:

Applying cross-validation prevents overfitting, and good out-of-sample performance, i.e. low generalisation error on unseen data, indicates that a model is not overfitted.

This statement is, of course, not true: cross-validation does not prevent your model from overfitting, and good out-of-sample performance does not guarantee a model that is not overfitted. What people actually refer to in one aspect of this statement is called overtraining. Unfortunately, this meme is propagated not only in industry but in some academic papers as well. At best, this is a confusion of jargon. But it would be good practice to set the jargon right and be clear about what we mean by overfitting when communicating our results.
Aim In this post, we will give an intuition on why model validation, i.e. approximating the generalisation error of a model fit, and detection of overfitting cannot be resolved simultaneously on a single model. After some conceptual introduction, we will work through a concrete example workflow for understanding overfitting, overtraining and a typical final model-building stage. We avoid Bayesian interpretations and regularisation and restrict the post to regression and cross-validation, since regularisation has different ramifications due to its mathematical properties and prior distributions have different implications in Bayesian statistics. We assume an introductory background in machine learning, so this is not a beginner's tutorial.
A recent question from Andrew Gelman, a Bayesian guru, regarding What is overfitting? was one of the reasons this post came about, along with my frustration at seeing practitioners be muddy on the meaning of overfitting, and at recently published data-science articles, and even some academic papers, continuing to claim the above statement.
What do we need to satisfy in supervised learning? One of the most basic tasks in mathematics is to find a solution to a function. If we restrict ourselves to real numbers in $n$ dimensions, our domain of interest is $\mathbb{R}^{n}$. Now imagine a set of $p$ points $x_{i}$ living in this domain, forming a dataset; this is actually a partial solution to a function. The main purpose of modelling is to find an explanation of the dataset, meaning that we need to determine the $m$ unknown parameters $a \in \mathbb{R}^{m}$. (Note that a non-parametric model does not mean a model with no parameters.) Mathematically speaking, this manifests as a function, $f(x, a)$, as we said before. This kind of modelling is usually called regression, interpolation or supervised learning, depending on the literature you are reading. It is a form of inverse problem: we do not know the parameters, but we have partial information about the variables. The main issue here is ill-posedness, meaning that the solution is not well-posed. Omitting axiomatic technical details, the practical problem is that we can find many functions $f(x, a)$, or models, explaining the dataset. So we seek the following two properties of our model solution, $f(x, a)=0$.
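As a toy illustration of this ill-posedness (a hypothetical sketch, not part of the original example), two entirely different functions can explain the same small dataset exactly, so the data alone cannot single out one model:

```r
# Two distinct models that both explain the same tiny dataset perfectly:
# the data alone cannot tell us which one is "the" solution.
x <- c(0, 1)
y <- c(0, 1)
f1 <- function(x) x    # a linear model
f2 <- function(x) x^3  # a cubic model
all(f1(x) == y)  # TRUE
all(f2(x) == y)  # TRUE
```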
1. Generalized: A model should not depend on the dataset. This step is called model validation.
2. Minimally complex: A model should obey Occam’s razor or principle of parsimony. This step is called model selection.
Figure 1: A workflow for model validation and selection in supervised learning.
Up to now, we have not named a technique for checking whether a model is generalised, nor for selecting the best model. Unfortunately, there is no unique way of doing either; that is the task of the data scientist or quantitative practitioner, and it requires human judgement.
Model validation: An example One way to check whether a model is generalised enough is to come up with a metric for how well it explains the dataset. Our task in model validation is to estimate the model error. For example, root mean square deviation (RMSD) is one metric we can use. If the RMSD is low, we could say that our model fit is good; ideally it should be close to zero. But this is not generalisation if we use the same dataset to measure goodness-of-fit. We should use a different dataset, specifically an out-of-sample dataset, to validate this as much as we can: the so-called hold-out method. Out-of-sample is just a fancy way of saying that we did not use the same dataset to find the values of the parameters $a$. An improved way of doing this is cross-validation: we split our dataset into $k$ partitions and obtain $k$ RMSD values to average over. This is summarised in Figure 1. Note that different parameterisations of the same model do not constitute different models.
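The hold-out idea can be sketched in a few lines of R (hypothetical data here, not the post's simulated dataset, which appears later in the appendix):

```r
# Hold-out validation sketch: fit on half the data, estimate RMSD on the rest.
set.seed(1)
x <- runif(50)
y <- sin(2*pi*x) + rnorm(50, 0, 0.1)
train <- sample(50, 25)
test  <- setdiff(1:50, train)
fit  <- lm(y ~ poly(x, 3), data = data.frame(x = x, y = y)[train, ])
pred <- predict(fit, newdata = data.frame(x = x[test]))
rmsd <- sqrt(mean((pred - y[test])^2))  # out-of-sample error estimate
```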
Model Selection: Detection of overfitting Overfitting comes into play when we try to satisfy the 'minimally complex model' criterion. This is a comparison problem, and we need more than one model to judge whether a given model is an overfit. Douglas Hawkins, in his classic paper The Problem of Overfitting, states that
Overfitting of models is widely recognized as a concern. It is less recognized however that overfitting is not an absolute but involves a comparison. A model overfits if it is more complex than another model that fits equally well.

The important point here is what we mean by a complex model, or how we can quantify model complexity. Unfortunately, again, there is no unique way of doing this. One of the most common approaches is to say that a model with more parameters is more complex. But this is again a bit of a meme and not generally true. One could resort to different measures of complexity. For example, by this definition $f_{1}(a,x)=ax$ and $f_{2}(a,x)=ax^2$ have the same complexity, having the same number of free parameters, but intuitively $f_{2}$ is more complex, since it is nonlinear in $x$. There are many information-theory-based measures of complexity, but discussing those is beyond the scope of this post. For demonstration purposes, we will treat a model with more parameters and a higher degree of nonlinearity as more complex.
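The parameter-counting pitfall can be made concrete with a toy sketch (hypothetical one-parameter functions matching the $f_{1}$, $f_{2}$ above):

```r
# Both models have exactly one free parameter a, yet f2 responds nonlinearly:
# equal parameter counts do not imply equal complexity.
f1 <- function(a, x) a * x
f2 <- function(a, x) a * x^2
f1(2, 3)  # 6: doubling x doubles the output
f2(2, 3)  # 18: doubling x quadruples the output
```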
Figure 2: Simulated data and the non-stochastic part of the data.
A usual procedure is to generate a synthetic, or simulated, dataset from a model as a gold standard and use this dataset to build other models. Let's use the following functional form, from the classic text of Bishop, with added Gaussian noise: $$ f(x) = \sin(2\pi x) + \mathcal{N}(0,0.1).$$ We generate a large enough set, 100 points, to avoid the sample-size issue discussed in Bishop's book; see Figure 2. Let's decide on two models we would like to apply to this dataset in a supervised learning task. Note that we won't be discussing the Bayesian interpretation here, so the equivalence of these models under a strong prior assumption is not an issue, as we are using this example only for ease of demonstrating the concept. Polynomial models of degree $3$ and degree $6$, which we call $g(x)$ and $h(x)$ respectively, are used to learn from the simulated data: $$g(x) = a_{0} + a_{1} x + a_{2} x^{2} + a_{3} x^{3}$$ and $$h(x) = b_{0} + b_{1} x + b_{2} x^{2} + b_{3} x^{3} + b_{4} x^{4} + b_{5} x^{5} + b_{6} x^{6}.$$
Figure 3: Overtraining occurs after around 40 percent of the data usage for g(x).
Overfitting with low validation error We can also estimate the 10-fold cross-validation error, CV-RMSD. For this sampling, g and h have CV-RMSD values of 0.13 and 0.12 respectively. So we have a situation in which the more complex model reaches similar predictive power under cross-validation, and we cannot detect this overfit by just looking at the CV-RMSD value or at the 'overtraining' curve in Figure 4. We need two models to compare, hence both Figure 3 and Figure 4, together with both CV-RMSD values. We might argue that on small datasets we could tell the difference by looking at the gap between test and training error; this is exactly how Bishop explains overfitting, where he points out overtraining on small datasets.
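This comparison can be sketched in a self-contained way (the data-generation settings mirror the post's example, but the seed and fold assignment are assumptions, so the exact numbers will differ from the appendix run):

```r
# 10-fold CV comparing a degree-3 and a degree-6 polynomial on sin(2*pi*x) data.
set.seed(42)
n <- 100
x <- runif(n)
y <- sin(2*pi*x) + rnorm(n, 0, 0.1)
folds <- sample(rep(1:10, length.out = n))  # random fold assignment
cv_rmsd <- function(deg) {
  errs <- sapply(1:10, function(k) {
    d    <- data.frame(x = x, y = y)
    fit  <- lm(y ~ poly(x, deg), data = d[folds != k, ])
    pred <- predict(fit, newdata = data.frame(x = x[folds == k]))
    sqrt(mean((pred - y[folds == k])^2))
  })
  mean(errs)
}
cv_rmsd(3)  # similar error for both degrees:
cv_rmsd(6)  # CV alone cannot flag the extra complexity
```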
Which trained model to deploy? Now the question is: we have empirically found the best-performing model with minimal complexity. All well, but which trained model should we use in production? Actually, we have already built the model during model selection. In the above case, since we got similar predictive power from g and h, we obviously use g, trained at the splitting sweet spot from Figure 3.
Figure 4: Overtraining occurs after around 30 percent of the data usage for h(x).
Outlook As more and more people use techniques from machine learning and inverse problems, both in academia and industry, some key technical concepts have drifted and taken on different definitions and meanings for different people, because people learn some concepts not by reading the literature carefully but verbally from their line managers or senior colleagues. This creates memes that are actually wrong, or at least create a lot of confusion in jargon. It is very important for all of us as practitioners to question all technical concepts, to seek their origins in the published scientific literature, and not to rely entirely on verbal explanations from our experienced colleagues. We should also strongly avoid ridiculing questions from colleagues even if they sound too simple; at the end of the day we never stop learning, and naive-looking questions can have very important consequences for the fundamentals of the field.
Figure 5: The deployed models h and g on the testing set with the original data.
The code used to produce the synthetic data, the modelling steps and the visualisations can be found in the github [repo]. In this appendix, we present this R code with detailed comments; the visualisation code is omitted but available in the github repository.
R (GNU S) provides a very powerful formula interface, probably the most advanced and expressive formula interface in statistical computing, along with S of course.
The two polynomials above can be expressed as formulas as well as functions we can evaluate.
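For example, a formula object stores the model structure symbolically and can be inspected before any fitting (a small sketch using the g formula from the appendix code):

```r
# A formula is a symbolic object; I() protects arithmetic inside it
# so x^2 means "x squared" rather than formula-interaction syntax.
g_formula <- ysim ~ I(x) + I(x^2) + I(x^3)
class(g_formula)     # "formula"
all.vars(g_formula)  # "ysim" "x"
```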
#'
#' Two polynomial models: g and h, 3rd and 6th degree respectively.
#'
g_fun <- function(x, params)
  as.numeric(params[1] + x*params[2] + x^2*params[3] + x^3*params[4])
h_fun <- function(x, params)
  as.numeric(params[1] + x*params[2] + x^2*params[3] + x^3*params[4] +
             x^4*params[5] + x^5*params[6] + x^6*params[7])
g_formula <- ysim ~ I(x) + I(x^2) + I(x^3)
h_formula <- ysim ~ I(x) + I(x^2) + I(x^3) + I(x^4) + I(x^5) + I(x^6)
Learning from data is achieved with the lm function from R:
#'
#' Given a data.frame with x and ysim, and an R formula with ysim = f(x),
#' fit a linear model.
#'
get_coefficients <- function(data_portion, model_formula) {
  model <- lm(model_formula, data = data_portion)
  return(model$coefficients)
}
#'
#' Find the prediction error (RMSD) for a given model_function and model_formula.
#'
lm_rmsd <- function(x_train, y_train, x_test, y_test, model_function, model_formula) {
  params <- get_coefficients(data.frame(x = x_train, ysim = y_train), model_formula)
  params[as.numeric(which(is.na(params)))] <- 0  # zero out NA coefficients from collinearity
  f_hat <- sapply(x_test, model_function, params = params)
  return(sqrt(sum((f_hat - y_test)^2) / length(f_hat)))
}
We can generate the simulated data discussed above: x is taken on a regular grid with seq and Gaussian noise is added with rnorm.
#'
#' Generate a synthetic dataset.
#' A similar model from Bishop:
#'
#'   f(x) = sin(2*pi*x) + N(0, 0.1)
#'
set.seed(424242)
f    <- function(x) sin(2*pi*x)
fsim <- function(x) sin(2*pi*x) + rnorm(1, 0, 0.1)
x    <- seq(0, 1, 1e-2)
y    <- sapply(x, f)
ysim <- sapply(x, fsim)
simdata <- data.frame(x = x, y = y, ysim = ysim)
#'
#' Demonstration of overtraining with g.
#'
library(reshape2)  # for melt
set.seed(424242)
model_function <- g_fun
model_formula  <- g_formula
split_percent  <- seq(0.05, 0.95, 0.03)
split_len      <- length(split_percent)
data_len       <- length(simdata$ysim)
splits         <- as.integer(data_len * split_percent)
test_rmsd  <- vector("numeric", split_len - 1)
train_rmsd <- vector("numeric", split_len - 1)
for (i in 2:split_len) {
  train_ix <- sample(1:data_len, splits[i-1])
  test_ix  <- (1:data_len)[-train_ix]
  train_rmsd[i-1] <- lm_rmsd(simdata$x[train_ix], simdata$ysim[train_ix],
                             simdata$x[train_ix], simdata$ysim[train_ix],
                             model_function, model_formula)
  test_rmsd[i-1]  <- lm_rmsd(simdata$x[train_ix], simdata$ysim[train_ix],
                             simdata$x[test_ix],  simdata$ysim[test_ix],
                             model_function, model_formula)
}
rmsd_df  <- data.frame(test_rmsd = test_rmsd, train_rmsd = train_rmsd,
                       percent = split_percent[-1])
rmsd_df2 <- melt(rmsd_df, id = c("percent"))
colnames(rmsd_df2)  <- c("percent", "Error_on", "rmsd")
rmsd_df2$test_train <- as.factor(rmsd_df2$Error_on)
#' 10-fold cross-validation for g(x) and h(x).
split_percent <- seq(0, 1, 0.1)
split_len     <- length(split_percent)
data_len      <- length(simdata$ysim)
splits        <- as.integer(data_len * split_percent)
cv_rmsd_g <- 0
cv_rmsd_h <- 0
for (i in 2:split_len) {  # 10-fold cross-validation
  test_ix  <- (splits[i-1]+1):splits[i]
  train_ix <- (1:data_len)[-test_ix]
  x_train  <- simdata$x[train_ix]
  y_train  <- simdata$ysim[train_ix]
  x_test   <- simdata$x[test_ix]
  y_test   <- simdata$ysim[test_ix]
  cv_rmsd_g <- cv_rmsd_g + lm_rmsd(x_train, y_train, x_test, y_test, g_fun, g_formula)
  cv_rmsd_h <- cv_rmsd_h + lm_rmsd(x_train, y_train, x_test, y_test, h_fun, h_formula)
}
cat("10-fold CV error G = ", cv_rmsd_g/split_len, "\n")  # 0.1304164
cat("10-fold CV error H = ", cv_rmsd_h/split_len, "\n")  # 0.1206458