Bayes vs. the Invaders! Part Two: Abnormal Distributions
Crossing the Line
This post continues our series on developing statistical models to explore the arcane relationship between UFO sightings and population. The previous post is available here: Bayes vs. the Invaders! Part One: The 37th Parallel.
The simple linear model developed in the previous post is far from satisfying. It makes many unsupportable assumptions about the data and the form of the residual errors from the model. Most obviously, it relies on an underlying Gaussian (or normal) distribution for its understanding of the data. For our count data, some basic features of the Gaussian are inappropriate.
Most notably:
- a Gaussian distribution is continuous whilst counts are discrete — you can’t have 2.3 UFO sightings in a given day;
- the Gaussian can produce negative values, which are impossible when dealing with counts — you can’t have a negative number of UFO sightings;
- the Gaussian is symmetrical around its mean value whereas count data is typically skewed.
Moving from the safety and comfort of basic linear regression, then, we will delve into the madness and chaos of generalized linear models that allow us to choose from a range of distributions to describe the relationship between state population and counts of UFO sightings.
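As a brief, purely illustrative aside, the mismatch listed above is easy to see by comparing random draws from a Gaussian with draws from a Poisson distribution of similar mean; the particular parameter values below are arbitrary:

```r
# Gaussian draws are continuous and can be negative; Poisson draws are
# non-negative integers, as counts of sightings must be.
set.seed(37)
gaussian_draws <- rnorm(1000, mean = 3, sd = 2)
poisson_draws  <- rpois(1000, lambda = 3)

any(gaussian_draws < 0)        # TRUE (with near certainty): negative "counts" appear
all(poisson_draws >= 0)        # TRUE: Poisson draws are never negative
all(poisson_draws %% 1 == 0)   # TRUE: and are always whole numbers
```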
Basic Models
We will be working in a Bayesian framework, in which we assign a prior distribution to each parameter; this allows, and requires, us to express some prior knowledge about the parameters of interest. These priors are the initial starting points for the parameters, from which the model moves towards the underlying values as it learns from the data. The choice of priors can have significant effects not only on the outputs of the model, but also on its ability to function effectively; as such, it is an important, but also arcane and subtle, aspect of the Bayesian approach[4].
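To make this concrete, a Poisson regression of per-state sighting counts on state population, with a log link, can be written in Stan and fit from R via rstan. The sketch below is illustrative only: the variable names, the scaling of population, and the priors are assumptions made for this example rather than the exact model fitted here.

```r
library(rstan)

# Illustrative Stan code: counts of sightings per state modelled as Poisson,
# with the log of the expected count linear in population.
poisson_model_code <- "
data {
  int<lower=1> N;                    // number of states
  vector[N] population;              // state population (scaled, e.g. per 100,000; an assumption)
  array[N] int<lower=0> sightings;   // count of UFO sightings per state
}
parameters {
  real a;                            // intercept on the log scale
  real b;                            // effect of population on the log scale
}
model {
  a ~ normal(0, 10);                 // weakly-informative priors (illustrative choices)
  b ~ normal(0, 1);
  sightings ~ poisson_log(a + b * population);
}
generated quantities {
  array[N] int y_rep;                // posterior predictive draws
  for (n in 1:N)
    y_rep[n] = poisson_log_rng(a + b * population[n]);
}
"

# `population` and `sightings` are assumed to be per-state vectors prepared earlier.
fit_ufo_pop_poisson <- stan(model_code = poisson_model_code,
                            data = list(N          = length(sightings),
                                        population = population,
                                        sightings  = sightings),
                            chains = 4)
```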
For the Poisson model above, the traceplot can be created using the bayesplot library:
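A minimal sketch of such a call, assuming the fit object and parameter names from the model sketch above:

```r
library(bayesplot)

# Traceplot of the posterior samples for a and b, one line per chain.
mcmc_trace(as.array(fit_ufo_pop_poisson), pars = c("a", "b"))
```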
These traceplots exhibit the characteristic insane scribbling of well-mixed chains, often referred to — in, of course, hushed whispers — as a manifestation of a hairy caterpillar; the separate lines representing each chain are clearly overlapping and exploring the same areas. If, by contrast, the lines were largely separated or did not explore the same regions of the space, there would be reason to believe that our model had become lost and unable to find a coherent voice amongst the myriad babbling murmurs of the data.
A second check on the sanity of the modelling process is to examine the output of the model itself, showing the values of the fitted parameters of interest along with some diagnostic information:
```r
fit_ufo_pop_poisson %>%
  summary(pars = c("a", "b")) %>%
  extract2("summary")

       mean      se_mean           sd       2.5%        25%        50%        75%      97.5%    n_eff     Rhat
a 5.5199115 7.162684e-03 2.701118e-01 4.99950988 5.33593127 5.51805161 5.70399617 6.05461185 1422.118 1.001581
b 0.0107647 1.728273e-06 6.574476e-05 0.01063562 0.01072192 0.01076418 0.01080788 0.01089496 1447.097 1.001551
```
For assessment of successful model fit, the Rhat value represents the extent to which the various Markov chains exploring the space, of which there are four by default in Stan, are consistent with each other. As a rule of thumb, a value of Rhat > 1.1 indicates that the model has not converged appropriately and may require a longer set of sampling iterations, or an improved model. Here, the values of Rhat are close to the ideal value of 1.
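A quick programmatic version of that rule of thumb, again assuming the stanfit object from the sketch above, simply flags any parameter whose Rhat exceeds the 1.1 threshold:

```r
# Extract the Rhat column from the fit summary and check the threshold.
rhats <- summary(fit_ufo_pop_poisson)$summary[, "Rhat"]
any(rhats > 1.1, na.rm = TRUE)   # FALSE when all chains appear to have converged
```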
As a final step, we should examine how well our model can reproduce the shape of the original data. Models aim to be eerily lifelike parodies of the truth; in a Bayesian framework, and in the Stan language, we can build into the model the ability to draw random samples from the posterior predictive distribution — the set of parameters that the model has learnt from the data — to create new possible values of the outcomes based on the observed inputs. This process can be repeated many times to produce a multiplicity of possible outcomes drawn from model, which we can then visualize to see graphically how well our model fits the observed data.
In the Stan code above, this is created in the generated quantities block. When using more convenient libraries such as brms or rstanarm, draws from the posterior predictive distribution can be obtained more simply after the model has been fit, through a range of helper functions. Here, we undertake the process manually.
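One sketch of that manual process, assuming the generated quantities block above produced a y_rep matrix, is to extract the posterior predictive draws and overlay the densities of a subset of them on the density of the observed counts; the original figure may have been assembled differently, for example directly with ggplot2.

```r
library(bayesplot)

# Each row of y_rep is one simulated set of per-state counts drawn from
# the posterior predictive distribution.
y_rep <- rstan::extract(fit_ufo_pop_poisson, pars = "y_rep")$y_rep

# Overlay the densities of 50 simulated datasets on the observed counts.
ppc_dens_overlay(y = sightings, yrep = y_rep[1:50, ])
```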
We can see, then, how well the Poisson distribution, informed by our selection of priors, has shaped itself to the underlying data.
In the diagram above, the yellow line shows the densities of count values; the blue lines show a sample of twisted mockeries spawned by our piscine approximations. As can be seen, the model has roughly captured the shape of the true distribution, but has notable dissimilarities with the original data.
To appreciate the full horror of what we have wrought, of course, we can plot the predictions of the model against the real data.
This shows an extremely similar line of best fit to that produced by the basic Gaussian model in the previous post. Indeed, a side-by-side comparison shows that the 95% credible intervals around the line are wider in this Poisson-based model[5]. This most likely reflects the inflexibility of the Poisson distribution given the nature of our data, something that we will discuss and rectify in the next post.
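For reference, one way such a figure could be assembled by hand, again assuming the illustrative model above, is to compute the posterior mean and 95% credible interval of the expected count at each observed population value and plot them against the raw data; this is a sketch, not the code behind the original figure.

```r
library(ggplot2)

# Posterior draws of the intercept and slope.
draws <- rstan::extract(fit_ufo_pop_poisson, pars = c("a", "b"))

# Expected count (lambda) for each state under every posterior draw.
lambda <- sapply(population, function(p) exp(draws$a + draws$b * p))

pred <- data.frame(
  population = population,
  sightings  = sightings,
  fit        = apply(lambda, 2, mean),
  lower      = apply(lambda, 2, quantile, probs = 0.025),
  upper      = apply(lambda, 2, quantile, probs = 0.975)
)

# Fitted curve with its 95% credible ribbon, over the observed counts.
ggplot(pred, aes(x = population)) +
  geom_ribbon(aes(ymin = lower, ymax = upper), alpha = 0.3) +
  geom_line(aes(y = fit)) +
  geom_point(aes(y = sightings))
```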
Unsettling Distributions
In this post we have opened our eyes to the weirdly non-linear possibilities of generalised linear models; sealed and bound this concept within the wild philosophy of Bayesian inference; and unleashed the horrifying capacities of Markov Chain Monte Carlo methods and their manifestation in the Stan language.
Applying the Poisson distribution to our records of extraterrestrial sightings, we have seen that we can, to some extent, create a mindless Golem that imperfectly mimics the original data. In the next post, we will delve more deeply into the esoteric possibilities of other distributions for count data, explore ways in which to account for arcane relationships across and between per-state observations, and show how we can compare the effectiveness of different models to select the final glimpse of dread truth that we inadvisably seek.