class: center, middle, inverse, title-slide

.title[
# Quantitative Methods for LEL
]
.subtitle[
## Week 8 - Multiple predictors and interactions
]
.author[
### Stefano Coretta and Elizabeth Pankratz
]
.institute[
### University of Edinburgh
]
.date[
### 2023/11/07
]

---

## Summary from last week

.bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[
- **Binary outcomes** are outcome variables with two discrete levels (e.g., yes/no, grammatical/ungrammatical, correct/incorrect).
- We can visualise binary outcomes as **proportions**.
- Binary outcomes follow the Bernoulli distribution, which has one parameter, `\(p\)`, the probability of "success" (`family = bernoulli()`).
- A Bernoulli model returns estimates in log-odds. We can convert log-odds into probabilities with the logistic function (`plogis()`).
- A Bayesian Credible Interval (CrI) tells you the region in which, with some probability (like 95%, 60%, 73%, etc.), the true value lies.
]

---

<iframe allowfullscreen frameborder="0" height="100%" mozallowfullscreen style="min-width: 500px; min-height: 355px" src="https://app.wooclap.com/events/SQQFXB/questions/65436cf3c1a55c4249b6d94e" width="100%"></iframe>

---
layout: true

## The `shallow` data: Relation type and Branching

---

.bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[
**Lexical decision task:** Is the string you see a word in English or not?

- Outcome variable: **Accuracy** (incorrect vs correct).
- Predictors:
  - **Relation_type**: unrelated, non-constituent, constituent.
  - **Branching**: left ([[un-[ripe]]-ness]), right ([un-[[grace]-ful]]).
] --- ``` ## # A tibble: 865 × 5 ## Group ID Accuracy Relation_type Branching ## <chr> <chr> <fct> <chr> <chr> ## 1 L1 L1_01 correct Unrelated Left ## 2 L1 L1_01 correct Constituent Left ## 3 L1 L1_01 correct Unrelated Left ## 4 L1 L1_01 correct Constituent Left ## 5 L1 L1_01 incorrect Unrelated Left ## 6 L1 L1_01 correct Unrelated Right ## 7 L1 L1_01 correct Constituent Right ## 8 L1 L1_01 correct NonConstituent Left ## 9 L1 L1_01 correct NonConstituent Left ## 10 L1 L1_01 correct Constituent Left ## # ℹ 855 more rows ``` --- <img src="index_files/figure-html/shallow-plot-1.png" width="60%" style="display: block; margin: auto;" /> ??? *reobtainable*: left, *undeniable*: right --- layout: false ## Factorial design **Two-by-two factorial design** .center[ | | B = B1 | B = B2 | |-------- |-------- |-------- | | **A = A1** | A1, B1 | A1, B2 | | **A = A2** | A2, B1 | A2, B2 | ] -- <br> (Let's filter the data so we exclude the non-constituent cases) | | Branching = left | Branching = right | |------------------------|-------------------|--------------------| | Relation = unrelated | unrelated, left | unrelated, right | | Relation = constituent | constituent, left | constituent, right | -- <br> .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ How can we include **both predictors** in the model? ] --- layout: true ## Multiple predictors --- Last week's model with two dummy variables (for a three-level predictor): <br> .f3[ `$$logit(p) = \beta_0 + (\beta_1 \cdot relation_{ncons}) + (\beta_2 \cdot relation_{cons})$$` ] --- Let's wrangle the data. 
```r shallow <- shallow %>% filter(Relation_type != "NonConstituent") %>% mutate( Relation_type = factor(Relation_type, levels = c("Unrelated", "Constituent")), Branching = factor(Branching, levels = c("Left", "Right")), Accuracy = factor(Accuracy, levels = c("incorrect", "correct")) ) ``` --- <img src="index_files/figure-html/barplot-shallow-1.png" width="60%" style="display: block; margin: auto;" /> --- | | Branching = left | Branching = right | |------------------------|-------------------|--------------------| | Relation = unrelated | unrelated, left | unrelated, right | | Relation = constituent | constituent, left | constituent, right | -- <br> | Relation_type | Branching | |--------------- |----------- | | Unrelated | Left | | Unrelated | Right | | Constituent | Left | | Constituent | Right | --- <!-- my condolences to anyone trying to read this --> | When: | Then coded as: | | --- | --- | | <table> <thead> <tr> <th>Relation_type</th> <th>Branching</th> </tr> </thead> <tbody> <tr> <td>Unrelated</td> <td>Left</td> </tr> <tr> <td>Unrelated</td> <td>Right</td> </tr> <tr> <td>Constituent</td> <td>Left</td> </tr> <tr> <td>Constituent</td> <td>Right</td> </tr> </tbody> </table> | <table> <thead> <tr> <th>Relation_type</th> <th>Branching</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> </tr> <tr> <td>0</td> <td>1</td> </tr> <tr> <td>1</td> <td>0</td> </tr> <tr> <td>1</td> <td>1</td> </tr> </tbody> </table> | <!-- Here's how those two ensted tables look in markdown --> <!-- | Relation_type | Branching | --> <!-- |--------------- |----------- | --> <!-- | Unrelated | Left | --> <!-- | Unrelated | Right | --> <!-- | Constituent | Left | --> <!-- | Constituent | Right | --> <!-- | Relation_type | Branching | --> <!-- |--------------- |----------- | --> <!-- | 0 | 0 | --> <!-- | 0 | 1 | --> <!-- | 1 | 0 | --> <!-- | 1 | 1 | --> <br> -- Let's verify that this coding is what R will use: .pull-left[ ```r contrasts(shallow$Relation_type) ``` ``` ## Constituent ## 
Unrelated 0 ## Constituent 1 ``` ] .pull-right[ ```r contrasts(shallow$Branching) ``` ``` ## Right ## Left 0 ## Right 1 ``` ] --- | When: | Then coded as: | | --- | --- | | <table> <thead> <tr> <th>Relation_type</th> <th>Branching</th> </tr> </thead> <tbody> <tr> <td>Unrelated</td> <td>Left</td> </tr> <tr> <td>Unrelated</td> <td>Right</td> </tr> <tr> <td>Constituent</td> <td>Left</td> </tr> <tr> <td>Constituent</td> <td>Right</td> </tr> </tbody> </table> | <table> <thead> <tr> <th>Relation_type</th> <th>Branching</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> </tr> <tr> <td>0</td> <td>1</td> </tr> <tr> <td>1</td> <td>0</td> </tr> <tr> <td>1</td> <td>1</td> </tr> </tbody> </table> | <br> `$$logit(p) = \beta_0 + (\beta_1 \cdot relation) + (\beta_2 \cdot branch)$$` <br> -- $$ `\begin{aligned} \text{Unrelated, Left:} & & \beta_0 &+ (\beta_1 \cdot 0) + (\beta_2 \cdot 0) &&= \beta_0 &\\ \text{Unrelated, Right:} & & \beta_0 &+ (\beta_1 \cdot 0) + (\beta_2 \cdot 1) &&= \beta_0 + \beta_2 &\\ \text{Constituent, Left:} & & \beta_0 &+ (\beta_1 \cdot 1) + (\beta_2 \cdot 0) &&= \beta_0 + \beta_1 &\\ \text{Constituent, Right:} & & \beta_0 &+ (\beta_1 \cdot 1) + (\beta_2 \cdot 1) &&= \beta_0 + \beta_1 + \beta_2 &\\ \end{aligned}` $$ --- $$ `\begin{aligned} \text{acc} & \sim Bernoulli(p) \\ logit(p) & = \beta_0 + \beta_1 \cdot relation + \beta_2 \cdot branch \\ \beta_0 & \sim Gaussian(\mu_0, \sigma_0) \\ \beta_1 & \sim Gaussian(\mu_1, \sigma_1) \\ \beta_2 & \sim Gaussian(\mu_2, \sigma_2) \\ \end{aligned}` $$ -- ```r acc_mult_bm <- brm( Accuracy ~ Relation_type + Branching, family = bernoulli(), data = shallow, backend = 'cmdstanr', file = 'data/cache/acc_mult_bm' ) ``` --- ```r summary(acc_mult_bm) ``` ``` ## Family: bernoulli ## Links: mu = logit ## Formula: Accuracy ~ Relation_type + Branching ## Data: shallow (Number of observations: 692) ## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1; ## total post-warmup draws = 4000 ## ## Population-Level Effects: 
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS ## Intercept 1.08 0.17 0.75 1.43 1.00 4499 3212 ## Relation_typeConstituent 0.69 0.25 0.20 1.17 1.00 2877 2567 ## BranchingRight 1.44 0.27 0.91 1.99 1.00 2465 2441 ## ## Draws were sampled using sample(hmc). For each parameter, Bulk_ESS ## and Tail_ESS are effective sample size measures, and Rhat is the potential ## scale reduction factor on split chains (at convergence, Rhat = 1). ``` --- ``` ## Population-Level Effects: ## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS ## Intercept 1.08 0.17 0.75 1.43 1.00 4499 3212 ## Relation_typeConstituent 0.69 0.25 0.20 1.17 1.00 2877 2567 ## BranchingRight 1.44 0.27 0.91 1.99 1.00 2465 2441 ``` -- <br> .f4[ $$ `\begin{aligned} \text{acc} & \sim Bernoulli(p) \\ logit(p) & = \beta_0 + \beta_1 \cdot relation + \beta_2 \cdot branch \\ \beta_0 & \sim Gaussian(\mu_0, \sigma_0) \\ \beta_1 & \sim Gaussian(\mu_1, \sigma_1) \\ \beta_2 & \sim Gaussian(\mu_2, \sigma_2) \\ \end{aligned}` $$ ] --- ``` ## Population-Level Effects: ## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS ## Intercept 1.08 0.17 0.75 1.43 1.00 4499 3212 ## Relation_typeConstituent 0.69 0.25 0.20 1.17 1.00 2877 2567 ## BranchingRight 1.44 0.27 0.91 1.99 1.00 2465 2441 ``` <br> .f4[ $$ `\begin{aligned} \text{acc} & \sim Bernoulli(p) \\ logit(p) & = \beta_0 + \beta_1 \cdot relation + \beta_2 \cdot branch \\ \beta_0 & \sim Gaussian(1.08, 0.17) \\ \beta_1 & \sim Gaussian(\mu_1, \sigma_1) \\ \beta_2 & \sim Gaussian(\mu_2, \sigma_2) \\ \end{aligned}` $$ ] --- ``` ## Population-Level Effects: ## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS ## Intercept 1.08 0.17 0.75 1.43 1.00 4499 3212 ## Relation_typeConstituent 0.69 0.25 0.20 1.17 1.00 2877 2567 ## BranchingRight 1.44 0.27 0.91 1.99 1.00 2465 2441 ``` <br> .f4[ $$ `\begin{aligned} \text{acc} & \sim Bernoulli(p) \\ logit(p) & = \beta_0 + \beta_1 \cdot relation + \beta_2 \cdot branch \\ \beta_0 & \sim Gaussian(1.08, 
0.17) \\ \beta_1 & \sim Gaussian(0.69, 0.25) \\ \beta_2 & \sim Gaussian(\mu_2, \sigma_2) \\ \end{aligned}` $$ ] --- ``` ## Population-Level Effects: ## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS ## Intercept 1.08 0.17 0.75 1.43 1.00 4499 3212 ## Relation_typeConstituent 0.69 0.25 0.20 1.17 1.00 2877 2567 ## BranchingRight 1.44 0.27 0.91 1.99 1.00 2465 2441 ``` <br> .f4[ $$ `\begin{aligned} \text{acc} & \sim Bernoulli(p) \\ logit(p) & = \beta_0 + \beta_1 \cdot relation + \beta_2 \cdot branch \\ \beta_0 & \sim Gaussian(1.08, 0.17) \\ \beta_1 & \sim Gaussian(0.69, 0.25) \\ \beta_2 & \sim Gaussian(1.44, 0.27) \\ \end{aligned}` $$ ] --- layout: false layout: true ## Conditional posterior probabilities --- $$ `\begin{aligned} \text{Unrelated, Left:} & & \beta_0 &+ (\beta_1 \cdot 0) + (\beta_2 \cdot 0) &&= \beta_0 &\\ \text{Unrelated, Right:} & & \beta_0 &+ (\beta_1 \cdot 0) + (\beta_2 \cdot 1) &&= \beta_0 + \beta_2 &\\ \text{Constituent, Left:} & & \beta_0 &+ (\beta_1 \cdot 1) + (\beta_2 \cdot 0) &&= \beta_0 + \beta_1 &\\ \text{Constituent, Right:} & & \beta_0 &+ (\beta_1 \cdot 1) + (\beta_2 \cdot 1) &&= \beta_0 + \beta_1 + \beta_2 &\\ \end{aligned}` $$ <br> -- From the model: mean `\(\beta_0 = 1.08\)`, mean `\(\beta_1 = 0.69\)`, and mean `\(\beta_2 = 1.44\)`. <br> -- $$ `\begin{aligned} \text{Unrelated, Left:} & & 1.08 &+ (0.69 \cdot 0) + (1.44 \cdot 0) &&= 1.08 &= 1.08 \text{ log-odds}\\ \text{Unrelated, Right:} & & 1.08 &+ (0.69 \cdot 0) + (1.44 \cdot 1) &&= 1.08 + 1.44 &= 2.52 \text{ log-odds}\\ \text{Constituent, Left:} & & 1.08 &+ (0.69 \cdot 1) + (1.44 \cdot 0) &&= 1.08 + 0.69 &= 1.77 \text{ log-odds}\\ \text{Constituent, Right:} & & 1.08 &+ (0.69 \cdot 1) + (1.44 \cdot 1) &&= 1.08 + 0.69 + 1.44 &= 3.21 \text{ log-odds}\\ \end{aligned}` $$ ??? Ask: now, how move to probs? 
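---

One way to go from log-odds to probabilities is the logistic function, `plogis()` in R. A minimal sketch, recomputing the cell estimates above from the mean coefficients (these are point summaries only; with the fitted model you would apply `plogis()` to the full posterior draws, as in the plots that follow):

```r
# Mean coefficients from acc_mult_bm, on the log-odds scale.
b0 <- 1.08  # Intercept: Unrelated, Left
b1 <- 0.69  # Relation_typeConstituent
b2 <- 1.44  # BranchingRight

# Conditional log-odds for each cell of the two-by-two design.
log_odds <- c(
  unrel_left  = b0,
  unrel_right = b0 + b2,
  const_left  = b0 + b1,
  const_right = b0 + b1 + b2
)

# plogis() is the inverse of logit: p = 1 / (1 + exp(-log_odds)).
probs <- plogis(log_odds)
round(probs, 2)
```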
--- <img src="index_files/figure-html/mult-draws-dens-lo-1.png" width="60%" style="display: block; margin: auto;" /> --- <img src="index_files/figure-html/mult-draws-dens-p-1.png" width="60%" style="display: block; margin: auto;" /> --- <img src="index_files/figure-html/mult-draws-dens2-1.png" width="60%" style="display: block; margin: auto;" /> --- layout: true ## Do these estimates match the data? --- <img src="index_files/figure-html/barplot-shallow2-1.png" width="60%" style="display: block; margin: auto;" /> --- <img src="index_files/figure-html/barplot-shallow-mult-1.png" width="60%" style="display: block; margin: auto;" /> ??? The model assumes there's a difference in Branching = Right where there doesn't seem to be one. And because that difference is small, it's also underestimating the larger difference in Branching = Left. If only there were some way to tell the model that the effect of relation type can be different between the two levels of branching. Oh wait... --- layout: false layout: true ## How does the current model fall short? --- .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ The question we want to answer: - **"Is the effect of `Relation_type` different when `Branching == Left` compared to when `Branching == Right`?"** ] -- .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ Our current model doesn't let effects differ between levels of other variables. ] -- .bg-washed-green.b--dark-green.ba.bw2.br3.shadow-5.ph4.mt2[ The solution: Include an **interaction** between these two predictors. 
] --- In other words: we want a **difference of differences** <img src="index_files/figure-html/barplot-shallow3-1.png" width="60%" style="display: block; margin: auto;" /> --- In other words: we want a **difference of differences** <img src="index_files/figure-html/shallow-diffs-1.png" width="60%" style="display: block; margin: auto;" /> --- layout: false layout: true ## Including an interaction --- .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ - Add an **interaction term** in the model. - The interaction term gets its own beta coefficient. - The estimate of the interaction beta coefficient tells us **how much one predictor's effect changes between levels of the other**. ] --- `$$logit(p) = \beta_0 + (\beta_1 \cdot relation) + (\beta_2 \cdot branch) + (\beta_3 \cdot relation \cdot branch)$$` | When: | Then coded as: | | --- | --- | | <table> <thead> <tr> <th>Relation_type</th> <th>Branching</th> </tr> </thead> <tbody> <tr> <td>Unrelated</td> <td>Left</td> </tr> <tr> <td>Unrelated</td> <td>Right</td> </tr> <tr> <td>Constituent</td> <td>Left</td> </tr> <tr> <td>Constituent</td> <td>Right</td> </tr> </tbody> </table> | <table> <thead> <tr> <th>Relation_type</th> <th>Branching</th> <th>Relation_type:Branching</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>0</td> <td>0 \* 0 = 0</td> </tr> <tr> <td>0</td> <td>1</td> <td>0 \* 1 = 0</td> </tr> <tr> <td>1</td> <td>0</td> <td>1 \* 0 = 0</td> </tr> <tr> <td>1</td> <td>1</td> <td>1 \* 1 = 1</td> </tr> </tbody> </table> | -- $$ `\begin{aligned} \text{Unrelated, Left:} && \beta_0 &+ (\beta_1 \cdot 0) + (\beta_2 \cdot 0) + (\beta_3 \cdot 0) &&= \beta_0 &\\ \text{Unrelated, Right:} && \beta_0 &+ (\beta_1 \cdot 0) + (\beta_2 \cdot 1) + (\beta_3 \cdot 0) &&= \beta_0 + \beta_2 &\\ \text{Constituent, Left:} && \beta_0 &+ (\beta_1 \cdot 1) + (\beta_2 \cdot 0) + (\beta_3 \cdot 0) &&= \beta_0 + \beta_1 &\\ \text{Constituent, Right:} && \beta_0 &+ (\beta_1 \cdot 1) + (\beta_2 \cdot 1) + (\beta_3 \cdot 1) &&= \beta_0 + 
\beta_1 + \beta_2 + \beta_3 &\\ \end{aligned}` $$ --- .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ With both variables at their non-reference level (coded with 1) in a model **with no interaction:** $$ \beta_0 + (\beta_1 \cdot 1) + (\beta_2 \cdot 1) = \beta_0 + \beta_1 + \beta_2 \\ $$ ] .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ And in a model **with an interaction:** $$ \beta_0 + (\beta_1 \cdot 1) + (\beta_2 \cdot 1) + (\beta_3 \cdot 1 \cdot 1) = \beta_0 + \beta_1 + \beta_2 + \beta_3 \\ $$ ] -- .bg-washed-green.b--dark-green.ba.bw2.br3.shadow-5.ph4.mt2[ `\(\beta_3\)` **adjusts** the value of `\(\beta_1\)` and `\(\beta_2\)`. In other words, it **modulates the effect** of one predictor depending on the levels of the other. ] --- ```r acc_inter_bm <- brm( Accuracy ~ Relation_type + Branching + Relation_type:Branching, family = bernoulli(), data = shallow, backend = 'cmdstanr', file = 'data/cache/acc_inter_bm' ) ``` .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ A note on the syntax for specifying interactions in R: **`Relation_type + Branching + Relation_type:Branching`** is the same as **`Relation_type * Branching`** but more transparent about what's happening behind the scenes! 
] --- $$ `\begin{aligned} \text{acc} & \sim Bernoulli(p) \\ logit(p) & = \beta_0 + (\beta_1 \cdot relation) + (\beta_2 \cdot branch) + (\beta_3 \cdot relation \cdot branch)\\ \beta_0 & \sim Gaussian(\mu_0, \sigma_0) \\ \beta_1 & \sim Gaussian(\mu_1, \sigma_1) \\ \beta_2 & \sim Gaussian(\mu_2, \sigma_2) \\ \beta_3 & \sim Gaussian(\mu_3, \sigma_3) \\ \end{aligned}` $$ --- .f4[ ```r summary(acc_inter_bm) ``` ``` ## Family: bernoulli ## Links: mu = logit ## Formula: Accuracy ~ Relation_type + Branching + Relation_type:Branching ## Data: shallow (Number of observations: 692) ## Draws: 4 chains, each with iter = 2000; warmup = 1000; thin = 1; ## total post-warmup draws = 4000 ## ## Population-Level Effects: ## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS ## Intercept 1.02 0.17 0.69 1.38 1.00 3445 3109 ## Relation_typeConstituent 0.85 0.29 0.28 1.43 1.00 2527 2141 ## BranchingRight 1.69 0.36 1.04 2.42 1.00 2131 2325 ## Relation_typeConstituent:BranchingRight -0.63 0.55 -1.72 0.42 1.00 1860 2203 ## ## Draws were sampled using sample(hmc). For each parameter, Bulk_ESS ## and Tail_ESS are effective sample size measures, and Rhat is the potential ## scale reduction factor on split chains (at convergence, Rhat = 1). 
``` ] --- ``` ## Population-Level Effects: ## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS ## Intercept 1.02 0.17 0.69 1.38 1.00 3445 3109 ## Relation_typeConstituent 0.85 0.29 0.28 1.43 1.00 2527 2141 ## BranchingRight 1.69 0.36 1.04 2.42 1.00 2131 2325 ## Relation_typeConstituent:BranchingRight -0.63 0.55 -1.72 0.42 1.00 1860 2203 ``` <br> $$ `\begin{aligned} \text{acc} & \sim Bernoulli(p) \\ logit(p) & = \beta_0 + (\beta_1 \cdot relation) + (\beta_2 \cdot branch) + (\beta_3 \cdot relation \cdot branch)\\ \beta_0 & \sim Gaussian(1.02, 0.17) \\ \beta_1 & \sim Gaussian(0.85, 0.29) \\ \beta_2 & \sim Gaussian(1.69, 0.36) \\ \beta_3 & \sim Gaussian(-0.63, 0.55) \\ \end{aligned}` $$ --- layout: false layout: true ## Interpreting an interaction --- .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ **Interpretation 1:** - `\(\beta_3\)`'s mean of `\(-0.63\)` indicates an **average _negative_ adjustment to the effect of `Relation_type`** when we go from [branching = left] (the reference level) to [branching = right]. - In other words, the model suggests that the effect of `Relation_type` is **on average smaller** in [branching = right] than in [branching = left] (the reference level). - **However**, the 95% CrI `\([-1.72, 0.42]\)` covers both negative and positive values, thus suggesting uncertainty about the sign and magnitude of the interaction. ] -- .bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[ **Equivalently, interpretation 2:** - `\(\beta_3\)`'s mean indicates an **average _negative_ adjustment to the effect of `Branching`** when we go from [relation_type = unrelated] (reference level) to [relation_type = constituent]. - Or: the model suggests that the effect of `Branching` is **on average smaller** in [relation_type = constituent] than in [relation_type = unrelated] (reference level). But the CrI suggests uncertainty. 
] --- .pull-left[ <img src="index_files/figure-html/barplot-symm1-1.png" width="100%" style="display: block; margin: auto;" /> ] .pull-right[ <img src="index_files/figure-html/barplot-symm2-1.png" width="100%" style="display: block; margin: auto;" /> ] --- layout: false layout: true ## Conditional posterior probabilities --- **Mean conditional posterior probabilities (log-odds)** $$ `\begin{aligned} \text{Unrelated, Left:} && \beta_0 &+ (\beta_1 \cdot 0) + (\beta_2 \cdot 0) + (\beta_3 \cdot 0) &&= \beta_0 &\\ \text{Unrelated, Right:} && \beta_0 &+ (\beta_1 \cdot 0) + (\beta_2 \cdot 1) + (\beta_3 \cdot 0) &&= \beta_0 + \beta_2 &\\ \text{Constituent, Left:} && \beta_0 &+ (\beta_1 \cdot 1) + (\beta_2 \cdot 0) + (\beta_3 \cdot 0) &&= \beta_0 + \beta_1 &\\ \text{Constituent, Right:} && \beta_0 &+ (\beta_1 \cdot 1) + (\beta_2 \cdot 1) + (\beta_3 \cdot 1) &&= \beta_0 + \beta_1 + \beta_2 + \beta_3 &\\ \end{aligned}` $$ -- <br> From the model, the means are: `\(\beta_0 = 1.02\)`, `\(\beta_1 = 0.85\)`, `\(\beta_2 = 1.69\)`, and `\(\beta_3 = -0.63\)`. 
$$ `\begin{aligned} \text{Unr., L:} && 1.02 &+ (0.85 \cdot 0) + (1.69 \cdot 0) + (-0.63 \cdot 0) &&= 1.02 &= 1.02 \text{ log-odds}\\ \text{Unr., R:} && 1.02 &+ (0.85 \cdot 0) + (1.69 \cdot 1) + (-0.63 \cdot 0) &&= 1.02 + 1.69 &= 2.71 \text{ log-odds}\\ \text{Con., L:} && 1.02 &+ (0.85 \cdot 1) + (1.69 \cdot 0) + (-0.63 \cdot 0) &&= 1.02 + 0.85 &= 1.87 \text{ log-odds}\\ \text{Con., R:} && 1.02 &+ (0.85 \cdot 1) + (1.69 \cdot 1) + (-0.63 \cdot 1) &&= 1.02 + 0.85 + 1.69 -0.63 &= 2.93 \text{ log-odds}\\ \end{aligned}` $$ --- .pull-left[ <img src="index_files/figure-html/inter-draws-dens-1-1.png" width="100%" style="display: block; margin: auto;" /> ] .pull-right[ <img src="index_files/figure-html/inter-draws-dens2-1.png" width="100%" style="display: block; margin: auto;" /> ] --- **Log-odds of "correct" response** .pull-left[ <img src="index_files/figure-html/mult-draws-dens-L-lo-1.png" width="100%" style="display: block; margin: auto;" /> ] .pull-right[ <img src="index_files/figure-html/inter-draws-dens-R-lo-1.png" width="100%" style="display: block; margin: auto;" /> ] --- **Probability of "correct" response** .pull-left[ <img src="index_files/figure-html/mult-draws-dens-L-1.png" width="100%" style="display: block; margin: auto;" /> ] .pull-right[ <img src="index_files/figure-html/inter-draws-dens-R-1.png" width="100%" style="display: block; margin: auto;" /> ] --- .pull-left[ <img src="index_files/figure-html/mult-barplot-L-1.png" width="100%" style="display: block; margin: auto;" /> ] .pull-right[ <img src="index_files/figure-html/inter-barplot-R-1.png" width="100%" style="display: block; margin: auto;" /> ] --- layout: false ## Reporting > We fitted a Bayesian linear model with response accuracy as the outcome variable, using a Bernoulli distribution as the distribution family of the outcome. We included the following predictors: prime relation type (unrelated vs constituent), branching (left vs right), and an interaction between the two. 
The predictors were coded using the default treatment contrasts, with the reference level set to the first level of each predictor (unrelated for relation type, left for branching).
>
> According to the model, we can be 95% confident that the probability of obtaining a correct response is between 67 and 80% when the relation type is unrelated and the word pair is left-branching ($\beta$ = 1.02, SD = 0.17, 95% CrI [0.69, 1.38]). When the relation type is unrelated and the word pair is right-branching, the probability of a correct response is between 90 and 97%, at 95% confidence ($\beta$ = 1.69, SD = 0.36, 95% CrI [1.04, 2.42]). When the relation type is constituent and the word pair is left-branching, the model suggests a probability of a correct response between 81 and 91% ($\beta$ = 0.85, SD = 0.29, 95% CrI [0.28, 1.43]). When the relation type is constituent and the word pair is right-branching, we can be 95% confident that the probability of a correct response is between 91 and 97% ($\beta$ = -0.63, SD = 0.55, 95% CrI [-1.72, 0.42]).
>
> As suggested by the 95% CrI of the interaction term (in log-odds, [-1.72, 0.42]), there is quite a lot of uncertainty as to whether the difference in the probability of a correct response between unrelated and constituent pairs in right-branching pairs differs from that in left-branching pairs, since the interval covers both negative and positive values. Moreover, the conditional posterior probabilities of unrelated, right-branching pairs on the one hand and constituent, right-branching pairs on the other are very similar, as can be seen in the plot above (and as suggested by their respective 95% CrIs: 90-97% vs 91-97%).

---

## Summary

.bg-washed-blue.b--dark-blue.ba.bw2.br3.shadow-5.ph4.mt2[
- The factorial design of a study is a tabular representation of the combinations of variables and levels the study employs.
- We can fit a model that contains an **interaction term** between multiple predictors when we want to allow the effect of one predictor to differ depending on the levels of the other predictors.
When a study fully crosses two predictors, it is usually a good idea to include their interaction: leaving it out forces the model to assume that the effect of each predictor is identical across the levels of the other.
- The interaction term's `\(\beta\)` tells us **how much one predictor's effect changes between the reference and non-reference levels of the other**.
]
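---

## Checking the coding in R

The 0/1 coding tables shown earlier, including the interaction column, can be inspected directly with `model.matrix()`. A sketch using a small toy data frame (hypothetical data, not the `shallow` dataset, but with the same factors and level ordering):

```r
# Toy data: one row per cell of the two-by-two design.
d <- data.frame(
  Relation_type = factor(
    c("Unrelated", "Unrelated", "Constituent", "Constituent"),
    levels = c("Unrelated", "Constituent")
  ),
  Branching = factor(
    c("Left", "Right", "Left", "Right"),
    levels = c("Left", "Right")
  )
)

# The model matrix shows the treatment coding R builds internally:
# an intercept column, one dummy column per predictor, and an
# interaction column that is the product of the two dummies.
mm <- model.matrix(~ Relation_type * Branching, data = d)
mm
```

The column `Relation_typeConstituent:BranchingRight` is 1 only in the constituent, right-branching row, exactly as in the coding table above.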