Statistics without Maths

[This article was first published on mikeksmith's posterous, and kindly contributed to R-bloggers.]

I got an interesting message the other day from Chris Atherton, who has offered to run a workshop on statistics and data visualisation at the Technical Communications UK conference. The problem is that some tech writers have only a limited understanding of statistics, and she wanted a primer that wouldn’t be too scary for them while still engaging the more quantitative members of the audience. How to meet the needs of both groups?

My idea is that you could motivate and teach statistics without having to resort to equations and mathematics. How? Lots of graphs. Like this:

[Graphs for this post were in a gallery on the original Posterous site]

Let’s look at a sample of 10 observations. You can see the spread in the data. Since I generated this data, I know that the mean should be 0, but for this sample the estimate looks quite different. People sometimes get confused about the difference between the standard deviation and the standard error (of the mean). The standard deviation describes the spread of the observed data. Roughly speaking, for normally distributed data about 95% of observations lie within 2 standard deviations of the mean – so most of the data should lie between the outer vertical red lines in the second graph.
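Something like the following minimal base-R sketch captures that first picture. This is not the code from the attached file, and the seed is arbitrary, so the numbers won’t match the original figures exactly:

# One trial: 10 observations from a Normal(0, 1) distribution, with the
# sample mean and mean +/- 2 standard deviations marked by vertical red lines.
set.seed(42)                        # arbitrary seed, for reproducibility only
x <- rnorm(10, mean = 0, sd = 1)

plot(x, rep(1, length(x)), yaxt = "n", ylab = "", xlab = "Observed value",
     xlim = c(-4, 4), main = "Sample of 10 observations")
abline(v = mean(x), col = "red", lwd = 2)                     # sample mean
abline(v = mean(x) + c(-2, 2) * sd(x), col = "red", lty = 2)  # mean +/- 2 SD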

If we increase the number of observations, we get a much better handle on what the mean value actually is. (Large samples give better confidence than small samples for making inferences.) By the time we get to 1000 observations we can see that the estimated mean and standard deviation really do match what I used to generate the data (mean = 0, std dev = 1).
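A sketch of how those estimates settle down as the sample grows (again, illustrative only, not the original code):

# Estimated mean and standard deviation at increasing sample sizes.
set.seed(42)
for (n in c(10, 100, 1000)) {
  x <- rnorm(n, mean = 0, sd = 1)
  cat(sprintf("n = %4d   mean = %6.3f   sd = %5.3f\n", n, mean(x), sd(x)))
}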

Our sample of 10 observations was only ONE trial, though. If we repeat the exercise 10 times, we can see how variable the data is between trials, and that the mean changes from one trial to the next.
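A sketch of that repetition, with ten independent trials of 10 observations each:

# Each column is one trial of 10 observations; the trial means wander
# around the true value of 0.
set.seed(42)
trials <- replicate(10, rnorm(10, mean = 0, sd = 1))
round(colMeans(trials), 3)    # one estimated mean per trial
boxplot(as.data.frame(trials), xlab = "Trial", ylab = "Observed value")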

Going back to the first sample, we can construct the standard error of the mean, which tells us how certain we are about the estimate of the mean based on this data. (Actually in this example the interval estimate for the mean excludes zero, even though the true value is zero).
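In R this might look like the sketch below (reusing the same arbitrary seed gives the same 10 observations as before); whether the interval happens to exclude zero depends on the particular sample you draw:

# Standard error of the mean and a rough 95% interval for one sample of 10.
set.seed(42)
x   <- rnorm(10, mean = 0, sd = 1)
sem <- sd(x) / sqrt(length(x))     # standard error of the mean
mean(x) + c(-2, 2) * sem           # rough 95% interval estimate for the mean
t.test(x)$conf.int                 # the t-based interval R itself reports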

Bigger samples reduce our uncertainty in the mean, so the standard error of the mean is smaller and the interval estimate for the mean is narrower.
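A sketch of that narrowing:

# The standard error (and so the interval for the mean) shrinks as n grows.
set.seed(42)
for (n in c(10, 100, 1000)) {
  x   <- rnorm(n, mean = 0, sd = 1)
  sem <- sd(x) / sqrt(n)
  cat(sprintf("n = %4d   SEM = %.3f   interval = (%6.3f, %6.3f)\n",
              n, sem, mean(x) - 2 * sem, mean(x) + 2 * sem))
}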

Now imagine we had two mechanisms generating the data, so that the only thing that differs between them is the location of the distribution. We can estimate the means and look at how different they are. Of course, we now know that larger samples characterise the distribution better, and that results may differ across trials for the same sample size. This is true here too.
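A sketch of the two-mechanism setup (the shift of 1 is an assumption for illustration; the post doesn't say what difference was used):

# Two groups that differ only in the location of their distribution.
set.seed(42)
groupA <- rnorm(10, mean = 0, sd = 1)
groupB <- rnorm(10, mean = 1, sd = 1)   # assumed shift of 1
mean(groupB) - mean(groupA)             # estimated difference in location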

Finally, if we want to say formally whether the difference between populations that we see here is statistically significant, we need to find out how likely such an observation would be if there were NO difference between the populations. So we take 100 trials of 10 observations from each of two populations both centred on zero, order these by how different the two samples are, and see where the difference from our two genuinely different populations lies. In this case there is no null trial where the difference is bigger than the one we see between our two samples of 10 observations, which means the p-value is < 0.01.
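A sketch of that final comparison, continuing the assumed shift of 1 from above:

# Simulate 100 "no difference" trials (both groups centred on zero) and see
# how often they produce a difference at least as big as the one we observed.
set.seed(42)
groupA   <- rnorm(10, mean = 0, sd = 1)
groupB   <- rnorm(10, mean = 1, sd = 1)
observed <- mean(groupB) - mean(groupA)

null_diffs <- replicate(100, mean(rnorm(10)) - mean(rnorm(10)))
mean(abs(null_diffs) >= abs(observed))   # empirical p-value
# If none of the 100 null trials beats the observed difference, this
# proportion is 0, i.e. the empirical p-value is below 1/100 (p < 0.01).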

So. Did you get all that?

The lesson for the more quantitative folks? How you can describe stats without maths.

[Code]

Attachment: Stats_without_maths.doc
