Binary response GLMM

Julian Faraway 1/6/23

See the introduction for an overview.

This example is discussed in more detail in the book Bayesian Regression Modeling with INLA.

Packages used:

library(ggplot2)
library(lme4)
library(INLA)
library(knitr)
library(brms)
library(mgcv)

Data and Model

In Fitzmaurice and Laird, 1993, data on 537 children aged 7-10 in six Ohio cities are reported. The response is binary: does the child suffer from wheezing (an indication of a pulmonary problem), where one indicates yes and zero no. This status is reported for each of four years, at ages 7, 8, 9 and 10. There is also an indicator variable for whether the mother of the child is a smoker. Because we have four binary responses for each child, we expect these to be correlated, and our model needs to reflect this.

We sum the number of smoking and non-smoking mothers:

data(ohio, package="brinla")
table(ohio$smoke)/4
  0   1 
350 187 

We use this to produce the proportion of wheezing children classified by age and maternal smoking status:

xtabs(resp ~ smoke + age, ohio)/c(350,187)
     age
smoke      -2      -1       0       1
    0 0.16000 0.14857 0.14286 0.10571
    1 0.16578 0.20856 0.18717 0.13904

Age has been adjusted so that nine years old is zero. We see that wheezing appears to decline with age and that there may be more wheezing in children with mothers who smoke. But the effects are not clear and we need modeling to be sure about these conclusions.

A plausible model uses a logit link with a linear predictor of the form: $$ \eta_{ij} = \beta_0 + \beta_1 age_j + \beta_2 smoke_i + u_i, \quad i=1, \dots ,537, \quad j=1,2,3,4, $$ with $$ P(Y_{ij} = 1) = {\exp(\eta_{ij}) \over 1+\exp(\eta_{ij})}. $$ The random effect $u_i$ models the propensity of child $i$ to wheeze. Children are likely to vary in their health condition and this effect enables us to include this unknown variation in the model. Because $u_i$ is added to all four observations for a child, we induce a positive correlation among the four responses as we might naturally expect. The response is Bernoulli or, in other words, binomial with trial size one.
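The inverse-logit mapping above is available directly in base R as `plogis()`; a quick numerical check that it matches the formula:

```r
# P(Y=1) = exp(eta)/(1+exp(eta)) -- the inverse logit
eta <- c(-2, 0, 2)
p_manual <- exp(eta) / (1 + exp(eta))
# base R's plogis() computes the same quantity
all.equal(p_manual, plogis(eta))
```

At $\eta = 0$ the probability is exactly 0.5, and the curve is symmetric about that point.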

LME4

Here is the model fit by maximum likelihood with adaptive Gauss-Hermite quadrature (nAGQ = 25) using the lme4 package:

modagh <- glmer(resp ~ age + smoke + (1|id), nAGQ=25, 
              family=binomial, data=ohio)
summary(modagh, correlation = FALSE)
Generalized linear mixed model fit by maximum likelihood (Adaptive Gauss-Hermite Quadrature, nAGQ = 25) ['glmerMod']
 Family: binomial  ( logit )
Formula: resp ~ age + smoke + (1 | id)
   Data: ohio

     AIC      BIC   logLik deviance df.resid 
  1603.3   1626.0   -797.6   1595.3     2144 

Scaled residuals: 
   Min     1Q Median     3Q    Max 
-1.373 -0.201 -0.177 -0.149  2.508 

Random effects:
 Groups Name        Variance Std.Dev.
 id     (Intercept) 4.69     2.16    
Number of obs: 2148, groups:  id, 537

Fixed effects:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -3.1015     0.2191  -14.16   <2e-16
age          -0.1756     0.0677   -2.60   0.0095
smoke         0.3986     0.2731    1.46   0.1444

We see that there is no significant effect due to maternal smoking.
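The fitted id variance of 4.69 implies substantial within-child correlation. On the latent (logit) scale, the intraclass correlation for a logistic GLMM is $\sigma_u^2/(\sigma_u^2 + \pi^2/3)$, where $\pi^2/3$ is the variance of the standard logistic distribution; a quick computation using the estimate above:

```r
# Latent-scale intraclass correlation for the logistic GLMM
sigma2_u <- 4.69              # id variance from the glmer fit above
icc <- sigma2_u / (sigma2_u + pi^2 / 3)
round(icc, 2)                 # about 0.59
```

So well over half of the latent-scale variation is between children rather than between years within a child.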

Suppose you do not take into account the correlated responses within individuals and fit a GLM ignoring the id random effect:

modglm <- glm(resp ~ age + smoke, family=binomial, data=ohio)
faraway::sumary(modglm)
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -1.8837     0.0838   -22.5   <2e-16
age          -0.1134     0.0541    -2.1    0.036
smoke         0.2721     0.1235     2.2    0.028

n = 2148 p = 3
Deviance = 1819.889 Null Deviance = 1829.089 (Difference = 9.199) 

We see that the effect of maternal smoking is significant (but this would be the incorrect conclusion).
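The practical cost of ignoring the correlation shows up in the standard errors; comparing the smoke SEs reported by the two fits above:

```r
# SE for the smoke coefficient: mixed model vs naive GLM
# (values copied from the two fits above)
se_glmm <- 0.2731
se_glm  <- 0.1235
round(se_glmm / se_glm, 1)   # the naive SE is less than half as large
```

The GLM's understated SE is what turns a non-significant effect into an apparently significant one.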

INLA

Integrated nested Laplace approximation is a method of Bayesian computation which uses approximation rather than simulation. More can be found on this topic in Bayesian Regression Modeling with INLA and the chapter on GLMMs.

We can fit this model in INLA as:

formula <- resp ~ age + smoke + f(id, model="iid")
imod <- inla(formula, family="binomial", data=ohio)

The id variable represents the child and we use an iid model indicating that the $u_i$ variables should be independent and identically distributed between children. A summary of the posteriors for the fixed effect components can be obtained as:

imod$summary.fixed |> kable()
mean sd 0.025quant 0.5quant 0.975quant mode kld
(Intercept) -2.92023 0.19075 -3.31025 -2.91443 -2.56275 -2.90295 0
age -0.17157 0.06271 -0.29482 -0.17148 -0.04883 -0.17130 0
smoke 0.38205 0.23645 -0.08007 0.38126 0.84862 0.37971 0
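One convenience of having the full posterior marginals is that tail probabilities are easy to compute. For instance, the posterior probability that the smoke effect is positive can be obtained with `inla.pmarginal()` (this sketch assumes the `imod` object from the fit above and requires INLA):

```r
# P(beta_smoke > 0) from the posterior marginal
# (requires INLA and the imod object fitted above)
1 - inla.pmarginal(0, imod$marginals.fixed$smoke)
```

This gives a direct probabilistic statement, in contrast to the p-value from the likelihood fits.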

The posterior means are similar to the lme4 estimates. We can get plots of the posteriors of the fixed effects:

fnames = names(imod$marginals.fixed)
par(mfrow=c(1,2))
for(i in 2:3){
  plot(imod$marginals.fixed[[i]],
       type="l",
       ylab="density",
       xlab=fnames[i])
  abline(v=0)
}
par(mfrow=c(1,1))

Figure 1: Posterior densities of the fixed effects model for the Ohio wheeze data.


We can also see the summary for the random effect SD:

hpd = inla.tmarginal(function(x) 1/sqrt(x), imod$marginals.hyperpar[[1]])
inla.zmarginal(hpd)
Mean            1.90247 
Stdev           0.148106 
Quantile  0.025 1.62102 
Quantile  0.25  1.79994 
Quantile  0.5   1.89932 
Quantile  0.75  2.00381 
Quantile  0.975 2.19165 
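INLA parameterizes the iid random effect by its precision $\tau$, which is why the transformation `1/sqrt(x)` is applied above; the arithmetic for a representative value:

```r
# SD = 1/sqrt(precision): a posterior precision near 0.28
# (an illustrative value) corresponds to the SD of about 1.9
# reported in the summary above
tau <- 0.28
1 / sqrt(tau)
```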

Again the result is similar to the lme4 output, although notice that INLA provides an assessment of the uncertainty in this value, in contrast to the lme4 result. We can also see the posterior density:

plot(hpd, type="l", xlab="id SD", ylab="density")

Figure 2: Posterior density of the SD of id


BRMS

BRMS stands for Bayesian Regression Models using Stan. It provides a convenient wrapper to Stan functionality.

Fitting the model is very similar to glmer as seen above. There is a bernoulli option for the family, which is appropriate for a 0-1 response.

bmod <- brm(resp ~ age + smoke + (1|id), family=bernoulli(), data=ohio, cores = 4)

We can check the MCMC diagnostics and the posterior densities with:

plot(bmod)

Looks quite similar to the INLA results.

We can look at the STAN code that brms used with:

stancode(bmod)
// generated with brms 2.18.0
functions {
}
data {
  int<lower=1> N;  // total number of observations
  int Y[N];  // response variable
  int<lower=1> K;  // number of population-level effects
  matrix[N, K] X;  // population-level design matrix
  // data for group-level effects of ID 1
  int<lower=1> N_1;  // number of grouping levels
  int<lower=1> M_1;  // number of coefficients per level
  int<lower=1> J_1[N];  // grouping indicator per observation
  // group-level predictor values
  vector[N] Z_1_1;
  int prior_only;  // should the likelihood be ignored?
}
transformed data {
  int Kc = K - 1;
  matrix[N, Kc] Xc;  // centered version of X without an intercept
  vector[Kc] means_X;  // column means of X before centering
  for (i in 2:K) {
    means_X[i - 1] = mean(X[, i]);
    Xc[, i - 1] = X[, i] - means_X[i - 1];
  }
}
parameters {
  vector[Kc] b;  // population-level effects
  real Intercept;  // temporary intercept for centered predictors
  vector<lower=0>[M_1] sd_1;  // group-level standard deviations
  vector[N_1] z_1[M_1];  // standardized group-level effects
}
transformed parameters {
  vector[N_1] r_1_1;  // actual group-level effects
  real lprior = 0;  // prior contributions to the log posterior
  r_1_1 = (sd_1[1] * (z_1[1]));
  lprior += student_t_lpdf(Intercept | 3, 0, 2.5);
  lprior += student_t_lpdf(sd_1 | 3, 0, 2.5)
    - 1 * student_t_lccdf(0 | 3, 0, 2.5);
}
model {
  // likelihood including constants
  if (!prior_only) {
    // initialize linear predictor term
    vector[N] mu = rep_vector(0.0, N);
    mu += Intercept;
    for (n in 1:N) {
      // add more terms to the linear predictor
      mu[n] += r_1_1[J_1[n]] * Z_1_1[n];
    }
    target += bernoulli_logit_glm_lpmf(Y | Xc, mu, b);
  }
  // priors including constants
  target += lprior;
  target += std_normal_lpdf(z_1[1]);
}
generated quantities {
  // actual population-level intercept
  real b_Intercept = Intercept - dot_product(means_X, b);
}

We can see that some half-t distributions are used as priors for the hyperparameters.
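If these defaults are not suitable, brms allows priors to be set explicitly via its `prior` argument. A sketch, not evaluated here (the class names `"b"` and `"sd"` are brms's labels for the fixed effects and the group-level SD):

```r
# Sketch: refit with explicit priors instead of the defaults
# (assumes the same ohio data; not run in this document)
bmod2 <- brm(resp ~ age + smoke + (1|id), family = bernoulli(),
             data = ohio,
             prior = c(prior(normal(0, 2.5), class = "b"),
                       prior(student_t(3, 0, 1), class = "sd")),
             cores = 4)
```

With the fairly large sample here, reasonable changes to these priors would not be expected to alter the conclusions much.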

We examine the fit:

summary(bmod)
 Family: bernoulli 
  Links: mu = logit 
Formula: resp ~ age + smoke + (1 | id) 
   Data: ohio (Number of observations: 2148) 
  Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1;
         total post-warmup draws = 4000

Group-Level Effects: 
~id (Number of levels: 537) 
              Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
sd(Intercept)     2.20      0.19     1.86     2.60 1.00     1045     1569

Population-Level Effects: 
          Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
Intercept    -3.12      0.22    -3.57    -2.71 1.00     1606     2073
age          -0.18      0.07    -0.31    -0.04 1.00     5301     3086
smoke         0.40      0.28    -0.15     0.93 1.00     1580     2515

Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
and Tail_ESS are effective sample size measures, and Rhat is the potential
scale reduction factor on split chains (at convergence, Rhat = 1).

The results are consistent with previous results.

MGCV

It is possible to fit some GLMMs within the GAM framework of the mgcv package. An explanation of this can be found in this blog.

We need to make a factor version of id; otherwise it would be treated as a numerical variable.

ohio$fid = factor(ohio$id)
gmod = gam(resp ~ age + smoke + s(fid,bs="re"), 
           family=binomial, data=ohio, method="REML")

and look at the summary output:

summary(gmod)
Family: binomial 
Link function: logit 

Formula:
resp ~ age + smoke + s(fid, bs = "re")

Parametric coefficients:
            Estimate Std. Error z value Pr(>|z|)
(Intercept)  -2.3690     0.1508  -15.71   <2e-16
age          -0.1523     0.0627   -2.43    0.015
smoke         0.2956     0.2405    1.23    0.219

Approximate significance of smooth terms:
       edf Ref.df Chi.sq p-value
s(fid) 282    535    548  <2e-16

R-sq.(adj) =  0.393   Deviance explained = 46.1%
-REML = 814.01  Scale est. = 1         n = 2148

We get the fixed effect estimates. We also get a test on the random effect (as described in this article). The hypothesis of no variation between the ids is rejected.

We can get an estimate of the id SD:

gam.vcomp(gmod)
Standard deviations and 0.95 confidence intervals:

       std.dev  lower  upper
s(fid)  1.9486 1.6539 2.2957

Rank: 1/1

which is similar to the estimate from the glmer fit earlier.
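The fixed effects can also be translated back to the probability scale. For example, the fitted probabilities of wheeze at age nine (age = 0) for a typical child (random effect set to zero), using the coefficients from the gam fit above:

```r
# Predicted wheeze probability at age 9 (age = 0), random effect 0
b0 <- -2.3690   # intercept from the gam fit above
b2 <-  0.2956   # smoke coefficient from the gam fit above
plogis(c(nonsmoker = b0, smoker = b0 + b2))
```

Note these are conditional (median-child) probabilities, not population-averaged ones; averaging over the random effect distribution would give somewhat larger values.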

The random effect estimates for the children can be found with:

head(coef(gmod))
(Intercept)         age       smoke    s(fid).1    s(fid).2    s(fid).3 
   -2.36900    -0.15226     0.29557    -0.72026    -0.72026    -0.72026 

GINLA

In Wood (2019), a simplified version of INLA is proposed. We first construct the GAM model without fitting it and then use the ginla() function to perform the computation.

gmod = gam(resp ~ age + smoke + s(fid,bs="re"), 
           family=binomial, data=ohio, fit = FALSE)
gimod = ginla(gmod)

We get the posterior densities for the fixed effects as:

par(mfrow=c(1,2))
for(i in 2:3){
plot(gimod$beta[i,],gimod$density[i,],type="l",
     xlab=gmod$term.names[i],ylab="density")
}
par(mfrow=c(1,1))

Figure 3: Posteriors of the fixed effects


It is not straightforward to obtain the posterior densities of the hyperparameters.

Discussion

  • There are no strong differences in the results between the methods. In all cases, we do not find strong evidence of an effect of maternal smoking.

  • LME4 was very fast. INLA was fast. BRMS, MGCV and GINLA were slower. We have a large number of subject random effects which slows down the mgcv approach considerably.

Package version info

sessionInfo()
R version 4.2.1 (2022-06-23)
Platform: x86_64-apple-darwin17.0 (64-bit)
Running under: macOS Big Sur ... 10.16

Matrix products: default
BLAS:   /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRblas.0.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/4.2/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] parallel  stats     graphics  grDevices utils     datasets  methods   base     

other attached packages:
 [1] mgcv_1.8-41   nlme_3.1-161  brms_2.18.0   Rcpp_1.0.9    knitr_1.41    INLA_22.12.16 sp_1.5-1      foreach_1.5.2
 [9] lme4_1.1-31   Matrix_1.5-3  ggplot2_3.4.0

loaded via a namespace (and not attached):
  [1] minqa_1.2.5          colorspace_2.0-3     ellipsis_0.3.2       markdown_1.4         base64enc_0.1-3     
  [6] rstudioapi_0.14      Deriv_4.1.3          farver_2.1.1         rstan_2.26.13        MatrixModels_0.5-1  
 [11] DT_0.26              fansi_1.0.3          mvtnorm_1.1-3        bridgesampling_1.1-2 codetools_0.2-18    
 [16] splines_4.2.1        shinythemes_1.2.0    bayesplot_1.10.0     jsonlite_1.8.4       nloptr_2.0.3        
 [21] shiny_1.7.4          compiler_4.2.1       backports_1.4.1      assertthat_0.2.1     fastmap_1.1.0       
 [26] cli_3.5.0            later_1.3.0          htmltools_0.5.4      prettyunits_1.1.1    tools_4.2.1         
 [31] igraph_1.3.5         coda_0.19-4          gtable_0.3.1         glue_1.6.2           reshape2_1.4.4      
 [36] dplyr_1.0.10         posterior_1.3.1      V8_4.2.2             vctrs_0.5.1          svglite_2.1.0       
 [41] iterators_1.0.14     crosstalk_1.2.0      tensorA_0.36.2       xfun_0.36            stringr_1.5.0       
 [46] ps_1.7.2             mime_0.12            miniUI_0.1.1.1       lifecycle_1.0.3      gtools_3.9.4        
 [51] MASS_7.3-58.1        zoo_1.8-11           scales_1.2.1         colourpicker_1.2.0   promises_1.2.0.1    
 [56] Brobdingnag_1.2-9    faraway_1.0.9        inline_0.3.19        shinystan_2.6.0      yaml_2.3.6          
 [61] curl_4.3.3           gridExtra_2.3        loo_2.5.1            StanHeaders_2.26.13  stringi_1.7.8       
 [66] highr_0.10           dygraphs_1.1.1.6     checkmate_2.1.0      boot_1.3-28.1        pkgbuild_1.4.0      
 [71] systemfonts_1.0.4    rlang_1.0.6          pkgconfig_2.0.3      matrixStats_0.63.0   distributional_0.3.1
 [76] evaluate_0.19        lattice_0.20-45      labeling_0.4.2       rstantools_2.2.0     htmlwidgets_1.6.0   
 [81] tidyselect_1.2.0     processx_3.8.0       plyr_1.8.8           magrittr_2.0.3       R6_2.5.1            
 [86] generics_0.1.3       DBI_1.1.3            pillar_1.8.1         withr_2.5.0          xts_0.12.2          
 [91] abind_1.4-5          tibble_3.1.8         crayon_1.5.2         utf8_1.2.2           rmarkdown_2.19      
 [96] grid_4.2.1           callr_3.7.3          threejs_0.3.3        digest_0.6.31        xtable_1.8-4        
[101] httpuv_1.6.7         RcppParallel_5.1.5   stats4_4.2.1         munsell_0.5.0        shinyjs_2.1.0