12.2 Assumptions of linear models

Wait, what? I thought we were talking about GLMs in this chapter? We are. The first thing you need to know is that linear models are just a special case of the GLM: a linear model is a GLM that assumes a specific error distribution (the normal), which is what makes everything work so smoothly. The next few weeks of class are all about relaxing the assumptions of linear models so we can actually use these tools in the real world.

Let's take another look at the basic assumptions that we explicitly make when we use linear models, just in case you've forgotten them (we'll run through a quick diagnostic sketch right after the list):

  1. Residuals are normally distributed with a mean of zero

  2. Independence of observations (residuals)

  3. Homogeneity of variances

  4. Linear(izeable) relationship between X and Y
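
Before we dig into each of these, here is a minimal sketch of the kinds of residual checks we have in mind. This section doesn't prescribe any particular software, so take this as one illustration in Python with statsmodels, on simulated data that actually meets the assumptions:

```python
# A minimal sketch (Python/statsmodels, simulated data) of quick checks
# for assumptions 1 and 3 using the residuals of a fitted linear model.
import numpy as np
import statsmodels.api as sm
from scipy import stats
from statsmodels.stats.diagnostic import het_breuschpagan

rng = np.random.default_rng(42)
x = rng.uniform(0, 10, 100)
y = 2.0 + 0.5 * x + rng.normal(0, 1, 100)   # data that meet the assumptions

X = sm.add_constant(x)                      # design matrix with intercept
fit = sm.OLS(y, X).fit()
resid = fit.resid

# Assumption 1: residuals ~ Normal(0, sigma^2). Shapiro-Wilk is one quick check.
print(stats.shapiro(resid))

# Assumption 3: homogeneity of variances. Breusch-Pagan tests whether the
# residual variance changes with the predictors.
print(het_breuschpagan(resid, X))
```

Large p-values from both tests here would be consistent with (not proof of) the assumptions holding; plots of residuals against fitted values do the same job visually.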

12.2.1 Assumption 1: normality of residuals

We've seen these before, but let's recap. For assumption 1, we are actually assuming a couple of implicit things: (1) the response variable is continuous (it must be if its error structure is normal), and (2) the error in our model is normally distributed. On its own, this is probably the least important assumption of linear models, and it really only matters if we are trying to make predictions from the models that we make. Of course, we are often concerned with making predictions, so we can see why this might be important. More often, though, we violate this assumption in combination with assumption 4 above to such a degree that it actually does matter. For example, a response variable that is binomial (1 or 0) or multinomial in nature cannot possibly have normally distributed errors with respect to X unless there is absolutely no relationship between X and Y, right? So, if we wanted to predict the probability of patients dying from some medical treatment, or the presence/absence of a species across a landscape, then we can't use the linear models we've been using up until now.
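
To make that concrete, here is a small sketch (Python/statsmodels, simulated data; the dose-response setup is purely hypothetical) of fitting a binomial GLM, the kind of model this chapter builds toward, to a 1/0 response:

```python
# Sketch (Python/statsmodels, simulated data): a binary response modeled
# with a binomial GLM (logistic regression) instead of a linear model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
dose = rng.uniform(0, 10, 200)               # hypothetical treatment dose
p = 1 / (1 + np.exp(-(-3 + 0.8 * dose)))     # true probability of the outcome
died = rng.binomial(1, p)                    # observed 1/0 response

X = sm.add_constant(dose)
glm_fit = sm.GLM(died, X, family=sm.families.Binomial()).fit()
print(glm_fit.summary())                     # coefficients are on the logit scale
```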

12.2.2 Assumption 2: independence of observations

This time we'll separate assumption 2 into two components: collinearity and autocorrelation of errors. Remember that these problems manifest in the precision of our coefficient estimates, which has the potential to change the Type-I and Type-II error rates in our models and cause us to draw false conclusions about which variables are important. As we discussed earlier in the course, we expect to see some collinearity between explanatory variables, and we can balance this in our modeling through the use of model selection techniques that reduce Type-I and Type-II error. In the next couple of weeks, we will examine tools that help us determine whether collinearity is actually causing problems in our models that go beyond minor nuisances. As for the second component, autocorrelation, we can use formulations of the GLM based on generalized least squares to include auto-regressive correlation matrices in our analysis, which allows us to relax this assumption of linear models and improve the precision of our parameter estimates. Well, we could, but we won't do that here.
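
For a sense of what these diagnostics look like in practice, here is a sketch (Python/statsmodels, simulated data) of two common checks: variance inflation factors for collinearity among predictors, and the Durbin-Watson statistic for first-order autocorrelation in the residuals:

```python
# Sketch (Python/statsmodels, simulated predictors): quick diagnostics for
# the two components of assumption 2.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(7)
x1 = rng.normal(size=100)
x2 = x1 + rng.normal(scale=0.1, size=100)    # nearly collinear with x1
y = 1 + 2 * x1 + rng.normal(size=100)

X = sm.add_constant(np.column_stack([x1, x2]))
fit = sm.OLS(y, X).fit()

# VIF >> 10 is a common (if rough) red flag for collinearity
print([variance_inflation_factor(X, i) for i in range(1, X.shape[1])])

# Durbin-Watson near 2 suggests little first-order autocorrelation
print(durbin_watson(fit.resid))
```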

12.2.3 Assumption 3: homogeneity of variances

Previously, we looked at ways to mitigate this issue by introducing categorical explanatory variables to our models. During the coming weeks, we will look at models that allow us to relax this assumption further through the use of weighted least squares and random effects, which can be applied to a wide range of regression methods, from linear models to the GLMs and GLMMs of Chapter 14 and Chapter 15.
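
As a preview, here is a sketch (Python/statsmodels, simulated data in which the error variance is assumed known) of weighted least squares in action; in real analyses the weights usually have to be estimated:

```python
# Sketch (Python/statsmodels, simulated data): weighted least squares as one
# way to relax the constant-variance assumption.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
x = np.linspace(1, 10, 100)
sigma = 0.5 * x                                     # error SD grows with x
y = 2 + 0.5 * x + rng.normal(scale=sigma)

X = sm.add_constant(x)
wls_fit = sm.WLS(y, X, weights=1 / sigma**2).fit()  # downweight noisy points
print(wls_fit.params)
```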

12.2.4 Assumption 4: linearity and additivity

We've already looked at a couple of ways to deal with violations of these two assumptions, such as data transformation and polynomial formulations of the linear model. We will continue to apply these concepts during the next several weeks.
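
For example, here is a sketch (Python/statsmodels, simulated data) of a polynomial formulation: the relationship between X and Y is curved, but the model is still "linear" in its coefficients, so ordinary least squares works just fine:

```python
# Sketch (Python/statsmodels, simulated data): handling a curved X-Y
# relationship with a polynomial term rather than abandoning the linear model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
x = np.linspace(0, 10, 100)
y = 1 + 0.5 * x - 0.08 * x**2 + rng.normal(scale=0.5, size=100)

# The model stays linear in the coefficients even though x enters twice
X = sm.add_constant(np.column_stack([x, x**2]))
poly_fit = sm.OLS(y, X).fit()
print(poly_fit.params)                              # intercept, x, and x^2 terms
```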