16.8 Model selection

Let’s say we now want to say something about the statistical significance of our effects. There are a couple of ways we could do this. First, we could use model selection just like we demonstrated in Chapter 11.5…sort of. Instead of using AIC (or DIC, BIC, or WAIC), this time we will use a leave-one-out cross-validation information criterion, LOO-IC, from the loo package (see the package’s web vignette).

But first, we’ll need another model against which we can compare cray_model. Just as we did for lm() and glm(), we can create a null (intercept-only) model to evaluate whether including loglength as a predictor improves our understanding of variability in crayfish mass.

null_model <- stan_glm(formula = logmass ~ 1,
                       family = gaussian,
                       data = cray
                       )

Beautiful, now we can do some cross validation. This will take a hot minute!

On the Windows computer I am using, a bug in my installed version of loo means I either need to use a single core for this, or I need to extract the model log likelihoods manually with the log_lik() function until I update my version.

Extract the log likelihoods for each of the models like so:

null_lik <- log_lik(null_model)
cray_lik <- log_lik(cray_model)
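Each of these is a matrix with one row per posterior draw and one column per observation. One caveat: a bare matrix no longer carries the chain structure, so loo() will assume the draws are independent unless you pass relative effective sample sizes through its r_eff argument, which you can compute with loo::relative_eff(). A minimal sketch of that helper on a simulated matrix (a stand-in for the crayfish models, assuming four chains of 1,000 post-warmup draws each; check your own model’s settings):

```r
library(loo)

set.seed(42)
# Simulated stand-in for log_lik() output: 4 chains x 1000 draws = 4000 rows,
# 50 observations (the real matrices have one column per crayfish)
sim_lik <- matrix(rnorm(4000 * 50, mean = -1, sd = 0.2), nrow = 4000)

# chain_id labels each draw with its chain of origin
chains <- rep(1:4, each = 1000)

# relative_eff() wants values on the likelihood scale, hence exp();
# it returns one relative effective sample size per observation
r_eff <- relative_eff(exp(sim_lik), chain_id = chains)
length(r_eff)
```

The resulting vector can then be passed along as loo(sim_lik, r_eff = r_eff) to get more honest Monte Carlo standard errors.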

Now, we can conduct leave-one-out cross validation on the extracted log likelihoods using the loo() function. Wow, that is way easier than writing this out by hand, and way faster than re-fitting the model once for each data point!

null_loo <- loo(null_lik, cores = 7)
cray_loo <- loo(cray_lik, cores = 7)
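For intuition about what loo() is doing under the hood: the in-sample log pointwise predictive density is just column-wise averaging on the likelihood scale, and PSIS-LOO approximates the leave-one-out version of that quantity (elpd_loo) without ever refitting the model. A minimal sketch on a simulated matrix (not the crayfish models):

```r
set.seed(1)
# Simulated log-likelihood matrix: 2000 posterior draws x 30 observations
sim_lik <- matrix(rnorm(2000 * 30, mean = -1, sd = 0.2), nrow = 2000)

# In-sample lppd: average the likelihood over draws for each observation,
# take the log, then sum across observations
lppd <- sum(log(colMeans(exp(sim_lik))))

# loo() returns the leave-one-out analogue of this sum, which is what
# loo_compare() reports differences in (elpd_diff) between models
```

The leave-one-out version is always a bit lower than the in-sample lppd, because each observation is evaluated as if it had been held out of the fit.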

And, finally, we compare the models:

loo_compare(null_loo, cray_loo)
##        elpd_diff se_diff
## model2     0.0       0.0
## model1 -1714.1      70.0

Based on logic similar to the rules of thumb we used with AIC (here, comparing elpd_diff to its standard error), we can see that the relationship between loglength and logmass is far better supported than the null model, so we can discard the intercept-only model. No surprise there, but hopefully a useful demonstration.
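As a quick sanity check on that rule of thumb (a common heuristic, not a hard cutoff), we can compare the size of the difference to its standard error using the numbers printed above:

```r
# Values copied from the loo_compare() output above
elpd_diff <- -1714.1
se_diff <- 70.0

# Heuristic: a difference is "clear" when |elpd_diff| is at least a few
# times larger than se_diff; here it is roughly 24 times larger
abs(elpd_diff) / se_diff
```

With a ratio this large, there is no real ambiguity about which model predicts better.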