Report text duplicated many times for brms model using in a tidymodels workflow #418

Open · JamesHWade opened this issue Mar 25, 2024 · 0 comments

Describe the bug

The report text produced by report(fit) is repeated several times for {brms} models.

To Reproduce

Here is a reprex:

I'm using a tidymodels workflow in this reprex, but the issue also appears for brms models fit outside of tidymodels.

library(tidymodels)
library(bayesian)
#> Loading required package: brms
#> Loading required package: Rcpp
#> 
#> Attaching package: 'Rcpp'
#> The following object is masked from 'package:rsample':
#> 
#>     populate
#> Loading 'brms' package (version 2.21.0). Useful instructions
#> can be found by typing help('brms'). A more detailed introduction
#> to the package is available through vignette('brms_overview').
#> 
#> Attaching package: 'brms'
#> The following object is masked from 'package:dials':
#> 
#>     mixture
#> The following object is masked from 'package:stats':
#> 
#>     ar
library(brms)
library(easystats)
#> # Attaching packages: easystats 0.7.0.3
#> ✔ bayestestR  0.13.2     ✔ correlation 0.8.4.2 
#> ✔ datawizard  0.9.1.8    ✔ effectsize  0.8.6.6 
#> ✔ insight     0.19.10    ✔ modelbased  0.8.7   
#> ✔ performance 0.11.0     ✔ parameters  0.21.6.1
#> ✔ report      0.5.8.1    ✔ see         0.8.3.1
rec <- recipe(mpg ~ wt + cyl + drat, data = mtcars) |> 
  step_scale(all_predictors()) |> 
  step_center(all_predictors())

mod <- bayesian() |> 
  set_engine("brms")

wflow <- workflow() |> 
  add_recipe(rec) |> 
  add_model(mod) |>
  fit(data = mtcars)
#> Compiling Stan program...
#> Trying to compile a simple C file
#> Running /opt/homebrew/Cellar/r/4.3.3/lib/R/bin/R CMD SHLIB foo.c
#> using C compiler: ‘Apple clang version 15.0.0 (clang-1500.1.0.2.5)’
#> using SDK: ‘MacOSX14.2.sdk’
#> clang -I"/opt/homebrew/Cellar/r/4.3.3/lib/R/include" -DNDEBUG   -I"/opt/homebrew/lib/R/4.3/site-library/Rcpp/include/"  -I"/opt/homebrew/lib/R/4.3/site-library/RcppEigen/include/"  -I"/opt/homebrew/lib/R/4.3/site-library/RcppEigen/include/unsupported"  -I"/opt/homebrew/lib/R/4.3/site-library/BH/include" -I"/opt/homebrew/lib/R/4.3/site-library/StanHeaders/include/src/"  -I"/opt/homebrew/lib/R/4.3/site-library/StanHeaders/include/"  -I"/opt/homebrew/lib/R/4.3/site-library/RcppParallel/include/"  -I"/opt/homebrew/lib/R/4.3/site-library/rstan/include" -DEIGEN_NO_DEBUG  -DBOOST_DISABLE_ASSERTS  -DBOOST_PENDING_INTEGER_LOG2_HPP  -DSTAN_THREADS  -DUSE_STANC3 -DSTRICT_R_HEADERS  -DBOOST_PHOENIX_NO_VARIADIC_EXPRESSION  -D_HAS_AUTO_PTR_ETC=0  -include '/opt/homebrew/lib/R/4.3/site-library/StanHeaders/include/stan/math/prim/fun/Eigen.hpp'  -D_REENTRANT -DRCPP_PARALLEL_USE_TBB=1   -I/opt/homebrew/opt/gettext/include -I/opt/homebrew/opt/readline/include -I/opt/homebrew/opt/xz/include -I/opt/homebrew/include    -fPIC  -g -O2  -c foo.c -o foo.o
#> In file included from <built-in>:1:
#> In file included from /opt/homebrew/lib/R/4.3/site-library/StanHeaders/include/stan/math/prim/fun/Eigen.hpp:22:
#> In file included from /opt/homebrew/lib/R/4.3/site-library/RcppEigen/include/Eigen/Dense:1:
#> In file included from /opt/homebrew/lib/R/4.3/site-library/RcppEigen/include/Eigen/Core:19:
#> /opt/homebrew/lib/R/4.3/site-library/RcppEigen/include/Eigen/src/Core/util/Macros.h:679:10: fatal error: 'cmath' file not found
#> #include <cmath>
#>          ^~~~~~~
#> 1 error generated.
#> make: *** [foo.o] Error 1
#> Start sampling
#> 
#> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 2.2e-05 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.22 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 0.018 seconds (Warm-up)
#> Chain 1:                0.017 seconds (Sampling)
#> Chain 1:                0.035 seconds (Total)
#> Chain 1: 
#> 
#> (Chains 2-4 sample similarly; their output is omitted here for brevity.)

mt_fit <- wflow |> extract_fit_engine()

report(mt_fit)
#> Response residuals not available to calculate mean square error. (R)MSE
#>   is probably not reliable.
#> Start sampling
#> 
#> SAMPLING FOR MODEL 'anon_model' NOW (CHAIN 1).
#> Chain 1: 
#> Chain 1: Gradient evaluation took 4e-06 seconds
#> Chain 1: 1000 transitions using 10 leapfrog steps per transition would take 0.04 seconds.
#> Chain 1: Adjust your expectations accordingly!
#> Chain 1: 
#> Chain 1: 
#> Chain 1: Iteration:    1 / 2000 [  0%]  (Warmup)
#> Chain 1: Iteration:  200 / 2000 [ 10%]  (Warmup)
#> Chain 1: Iteration:  400 / 2000 [ 20%]  (Warmup)
#> Chain 1: Iteration:  600 / 2000 [ 30%]  (Warmup)
#> Chain 1: Iteration:  800 / 2000 [ 40%]  (Warmup)
#> Chain 1: Iteration: 1000 / 2000 [ 50%]  (Warmup)
#> Chain 1: Iteration: 1001 / 2000 [ 50%]  (Sampling)
#> Chain 1: Iteration: 1200 / 2000 [ 60%]  (Sampling)
#> Chain 1: Iteration: 1400 / 2000 [ 70%]  (Sampling)
#> Chain 1: Iteration: 1600 / 2000 [ 80%]  (Sampling)
#> Chain 1: Iteration: 1800 / 2000 [ 90%]  (Sampling)
#> Chain 1: Iteration: 2000 / 2000 [100%]  (Sampling)
#> Chain 1: 
#> Chain 1:  Elapsed Time: 0.015 seconds (Warm-up)
#> Chain 1:                0.014 seconds (Sampling)
#> Chain 1:                0.029 seconds (Total)
#> Chain 1: 
#> 
#> (Chains 2-4 sample similarly; their output is omitted here for brevity.)
#> Response residuals not available to calculate mean square error. (R)MSE
#>   is probably not reliable.
#> We fitted a Bayesian linear model (estimated using MCMC sampling with 4 chains
#> of 2000 iterations and a warmup of 1000) to predict ..y with wt, cyl and drat
#> (formula: ..y ~ wt + cyl + drat). Priors over parameters were set as student_t
#> (location = 19.20, scale = 5.40) distributions. The model's explanatory power
#> is substantial (R2 = 0.82, 95% CI [0.76, 0.85], adj. R2 = 0.79).  Within this
#> model:
#> 
#>   - The effect of b Intercept (Median = 20.07, 95% CI [19.10, 20.98]) has a
#> 100.00% probability of being positive (> 0), 100.00% of being significant (>
#> 0.30), and 100.00% of being large (> 1.81). The estimation successfully
#> converged (Rhat = 1.001) and the indices are reliable (ESS = 2523)
#>   - The effect of b wt (Median = -3.10, 95% CI [-4.78, -1.51]) has a 99.98%
#> probability of being negative (< 0), 99.98% of being significant (< -0.30), and
#> 94.27% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.000) and the indices are reliable (ESS = 3146)
#>   - The effect of b cyl (Median = -2.72, 95% CI [-4.27, -1.04]) has a 99.88%
#> probability of being negative (< 0), 99.80% of being significant (< -0.30), and
#> 85.78% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 3801)
#>   - The effect of b drat (Median = 0.02, 95% CI [-1.42, 1.40]) has a 51.20%
#> probability of being positive (> 0), 34.17% of being significant (> 0.30), and
#> 0.75% of being large (> 1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 2431)
#> 
#> Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT)
#> framework, we report the median of the posterior distribution and its 95% CI
#> (Highest Density Interval), along the probability of direction (pd), the
#> probability of significance and the probability of being large. The thresholds
#> beyond which the effect is considered as significant (i.e., non-negligible) and
#> large are |0.30| and |1.81| (corresponding respectively to 0.05 and 0.30 of the
#> outcome's SD). Convergence and stability of the Bayesian sampling has been
#> assessed using R-hat, which should be below 1.01 (Vehtari et al., 2019), and
#> Effective Sample Size (ESS), which should be greater than 1000 (Burkner,
#> 2017)., We fitted a Bayesian linear model (estimated using MCMC sampling with 4
#> chains of 2000 iterations and a warmup of 1000) to predict ..y with wt, cyl and
#> drat (formula: ..y ~ wt + cyl + drat). Priors over parameters were set as
#> uniform (location = , scale = ) distributions. The model's explanatory power is
#> substantial (R2 = 0.82, 95% CI [0.76, 0.85], adj. R2 = 0.79).  Within this
#> model:
#> 
#>   - The effect of b Intercept (Median = 20.07, 95% CI [19.10, 20.98]) has a
#> 100.00% probability of being positive (> 0), 100.00% of being significant (>
#> 0.30), and 100.00% of being large (> 1.81). The estimation successfully
#> converged (Rhat = 1.001) and the indices are reliable (ESS = 2523)
#>   - The effect of b wt (Median = -3.10, 95% CI [-4.78, -1.51]) has a 99.98%
#> probability of being negative (< 0), 99.98% of being significant (< -0.30), and
#> 94.27% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.000) and the indices are reliable (ESS = 3146)
#>   - The effect of b cyl (Median = -2.72, 95% CI [-4.27, -1.04]) has a 99.88%
#> probability of being negative (< 0), 99.80% of being significant (< -0.30), and
#> 85.78% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 3801)
#>   - The effect of b drat (Median = 0.02, 95% CI [-1.42, 1.40]) has a 51.20%
#> probability of being positive (> 0), 34.17% of being significant (> 0.30), and
#> 0.75% of being large (> 1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 2431)
#> 
#> Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT)
#> framework, we report the median of the posterior distribution and its 95% CI
#> (Highest Density Interval), along the probability of direction (pd), the
#> probability of significance and the probability of being large. The thresholds
#> beyond which the effect is considered as significant (i.e., non-negligible) and
#> large are |0.30| and |1.81| (corresponding respectively to 0.05 and 0.30 of the
#> outcome's SD). Convergence and stability of the Bayesian sampling has been
#> assessed using R-hat, which should be below 1.01 (Vehtari et al., 2019), and
#> Effective Sample Size (ESS), which should be greater than 1000 (Burkner,
#> 2017)., We fitted a Bayesian linear model (estimated using MCMC sampling with 4
#> chains of 2000 iterations and a warmup of 1000) to predict ..y with wt, cyl and
#> drat (formula: ..y ~ wt + cyl + drat). Priors over parameters were set as
#> uniform (location = , scale = ) distributions. The model's explanatory power is
#> substantial (R2 = 0.82, 95% CI [0.76, 0.85], adj. R2 = 0.79).  Within this
#> model:
#> 
#>   - The effect of b Intercept (Median = 20.07, 95% CI [19.10, 20.98]) has a
#> 100.00% probability of being positive (> 0), 100.00% of being significant (>
#> 0.30), and 100.00% of being large (> 1.81). The estimation successfully
#> converged (Rhat = 1.001) and the indices are reliable (ESS = 2523)
#>   - The effect of b wt (Median = -3.10, 95% CI [-4.78, -1.51]) has a 99.98%
#> probability of being negative (< 0), 99.98% of being significant (< -0.30), and
#> 94.27% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.000) and the indices are reliable (ESS = 3146)
#>   - The effect of b cyl (Median = -2.72, 95% CI [-4.27, -1.04]) has a 99.88%
#> probability of being negative (< 0), 99.80% of being significant (< -0.30), and
#> 85.78% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 3801)
#>   - The effect of b drat (Median = 0.02, 95% CI [-1.42, 1.40]) has a 51.20%
#> probability of being positive (> 0), 34.17% of being significant (> 0.30), and
#> 0.75% of being large (> 1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 2431)
#> 
#> Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT)
#> framework, we report the median of the posterior distribution and its 95% CI
#> (Highest Density Interval), along the probability of direction (pd), the
#> probability of significance and the probability of being large. The thresholds
#> beyond which the effect is considered as significant (i.e., non-negligible) and
#> large are |0.30| and |1.81| (corresponding respectively to 0.05 and 0.30 of the
#> outcome's SD). Convergence and stability of the Bayesian sampling has been
#> assessed using R-hat, which should be below 1.01 (Vehtari et al., 2019), and
#> Effective Sample Size (ESS), which should be greater than 1000 (Burkner,
#> 2017)., We fitted a Bayesian linear model (estimated using MCMC sampling with 4
#> chains of 2000 iterations and a warmup of 1000) to predict ..y with wt, cyl and
#> drat (formula: ..y ~ wt + cyl + drat). Priors over parameters were set as
#> uniform (location = , scale = ) distributions. The model's explanatory power is
#> substantial (R2 = 0.82, 95% CI [0.76, 0.85], adj. R2 = 0.79).  Within this
#> model:
#> 
#>   - The effect of b Intercept (Median = 20.07, 95% CI [19.10, 20.98]) has a
#> 100.00% probability of being positive (> 0), 100.00% of being significant (>
#> 0.30), and 100.00% of being large (> 1.81). The estimation successfully
#> converged (Rhat = 1.001) and the indices are reliable (ESS = 2523)
#>   - The effect of b wt (Median = -3.10, 95% CI [-4.78, -1.51]) has a 99.98%
#> probability of being negative (< 0), 99.98% of being significant (< -0.30), and
#> 94.27% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.000) and the indices are reliable (ESS = 3146)
#>   - The effect of b cyl (Median = -2.72, 95% CI [-4.27, -1.04]) has a 99.88%
#> probability of being negative (< 0), 99.80% of being significant (< -0.30), and
#> 85.78% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 3801)
#>   - The effect of b drat (Median = 0.02, 95% CI [-1.42, 1.40]) has a 51.20%
#> probability of being positive (> 0), 34.17% of being significant (> 0.30), and
#> 0.75% of being large (> 1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 2431)
#> 
#> Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT)
#> framework, we report the median of the posterior distribution and its 95% CI
#> (Highest Density Interval), along the probability of direction (pd), the
#> probability of significance and the probability of being large. The thresholds
#> beyond which the effect is considered as significant (i.e., non-negligible) and
#> large are |0.30| and |1.81| (corresponding respectively to 0.05 and 0.30 of the
#> outcome's SD). Convergence and stability of the Bayesian sampling has been
#> assessed using R-hat, which should be below 1.01 (Vehtari et al., 2019), and
#> Effective Sample Size (ESS), which should be greater than 1000 (Burkner, 2017).
#> and We fitted a Bayesian linear model (estimated using MCMC sampling with 4
#> chains of 2000 iterations and a warmup of 1000) to predict ..y with wt, cyl and
#> drat (formula: ..y ~ wt + cyl + drat). Priors over parameters were set as
#> student_t (location = 0.00, scale = 5.40) distributions. The model's
#> explanatory power is substantial (R2 = 0.82, 95% CI [0.76, 0.85], adj. R2 =
#> 0.79).  Within this model:
#> 
#>   - The effect of b Intercept (Median = 20.07, 95% CI [19.10, 20.98]) has a
#> 100.00% probability of being positive (> 0), 100.00% of being significant (>
#> 0.30), and 100.00% of being large (> 1.81). The estimation successfully
#> converged (Rhat = 1.001) and the indices are reliable (ESS = 2523)
#>   - The effect of b wt (Median = -3.10, 95% CI [-4.78, -1.51]) has a 99.98%
#> probability of being negative (< 0), 99.98% of being significant (< -0.30), and
#> 94.27% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.000) and the indices are reliable (ESS = 3146)
#>   - The effect of b cyl (Median = -2.72, 95% CI [-4.27, -1.04]) has a 99.88%
#> probability of being negative (< 0), 99.80% of being significant (< -0.30), and
#> 85.78% of being large (< -1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 3801)
#>   - The effect of b drat (Median = 0.02, 95% CI [-1.42, 1.40]) has a 51.20%
#> probability of being positive (> 0), 34.17% of being significant (> 0.30), and
#> 0.75% of being large (> 1.81). The estimation successfully converged (Rhat =
#> 1.001) and the indices are reliable (ESS = 2431)
#> 
#> Following the Sequential Effect eXistence and sIgnificance Testing (SEXIT)
#> framework, we report the median of the posterior distribution and its 95% CI
#> (Highest Density Interval), along the probability of direction (pd), the
#> probability of significance and the probability of being large. The thresholds
#> beyond which the effect is considered as significant (i.e., non-negligible) and
#> large are |0.30| and |1.81| (corresponding respectively to 0.05 and 0.30 of the
#> outcome's SD). Convergence and stability of the Bayesian sampling has been
#> assessed using R-hat, which should be below 1.01 (Vehtari et al., 2019), and
#> Effective Sample Size (ESS), which should be greater than 1000 (Burkner, 2017).

Created on 2024-03-24 with reprex v2.1.0
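For comparison, the duplication can reportedly be checked without tidymodels at all. A minimal sketch (untested here; it assumes {brms} and {report} are installed and that a Stan toolchain is available):

```r
# Hypothetical minimal reproduction outside tidymodels.
# brm() compiles and samples the model; report() is expected to print
# its narrative summary once, but the text is duplicated as shown above.
library(brms)
library(report)

fit <- brm(mpg ~ wt + cyl + drat, data = mtcars, refresh = 0)
report(fit)
```

If the duplication also occurs here, that would point at report's handling of brmsfit objects rather than at the tidymodels/bayesian wrapper.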

Expected behaviour

I expected the report text to appear only once.

Specifications (please complete the following information):

  • report version 0.5.8.1