This repository contains the R-package and scripts for the empirical application in the associated article.
The directory svarmawhf contains the R-package.
The file Package Structure (or its pdf version) is a good starting point for getting familiar with the important functions and the call structure.
All functions are documented.
The documentation can be accessed as usual with help(<function_name>) or ?<function_name>.
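For example, after loading the package, the help page of the package function create_results() (described below) can be opened in either of the usual ways:

```r
library(svarmawhf)

?create_results        # opens the help page for create_results()
help("create_results") # equivalent
```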
The package builds on the R-packages RLDM and rationalmatrices, authored jointly with Wolfgang Scherrer.
Since these packages might change, the parts which are necessary for the analysis in the associated article are extracted to R files whose names start with ~/svarmawhf/R/zz_ratmat_ and ~/svarmawhf/R/zz_rldm_.
Opening the file ~/svarmawhf.Rproj sets the R working directory to the project root. This is necessary so that all scripts run as intended and the package compiles. In RStudio, use the shortcut cmd+shift+d (or devtools::document()) to generate the documentation, and cmd+shift+b (or devtools::build()) to build the package, see [http://r-pkgs.had.co.nz/].
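From the R console, the same workflow (standard devtools calls) looks like this:

```r
# Run from the project root, i.e. with ~/svarmawhf.Rproj opened:
devtools::document()  # regenerate the man/ pages from the roxygen comments
devtools::build()     # build the package tarball
devtools::install()   # optionally, install the built package locally
```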
All files relevant for the analysis of the empirical application are contained in the directory scriptsP_whf_revision_sgt_bq. In the following, we describe the content of the files in this directory. Since all results are also uploaded to this folder, it is not necessary to run any script.
The analysis is separated into several steps. Each file starts with an overview of its main conclusions, i.e. it is not necessary to go through the files in detail if one is only interested in the main take-aways.
The Rmarkdown file Data Preparation (or the pdf version) loads and transforms the data of Blanchard and Quah in the same way as in GMR. Moreover, visualisations of the intermediary data transformation steps are shown and described.
Eventually, the data are saved and serve as input to the main script which we will describe next.
The main script performs the main work of estimating the normalised canonical WHF model for all combinations of integer-valued parameters.
Eventually, the results are saved as rds-files and later extracted. For the particular steps taken, see the documentation of the package function create_results(). In essence, we create a data frame of all combinations of integer-valued parameters, split this data frame for parallelisation, and perform the optimisation steps for each set of integer-valued parameters.
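A minimal sketch of this grid-and-split pattern follows; the parameter names and grid ranges are hypothetical placeholders, and the actual logic lives in create_results():

```r
# Hypothetical illustration of the pattern described above:
# enumerate all integer-valued parameter combinations, then split for parallel workers.
grid <- expand.grid(p = 0:2, q = 0:5, kappa = 0:1, k = 0:1)

n_workers <- 4
chunks <- split(grid, seq_len(nrow(grid)) %% n_workers)

# Each worker optimises the models in its chunk and saves the results, e.g.
# saveRDS(chunk_results, file = sprintf("results_chunk_%02d.rds", chunk_id))
```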
The SLURM script is uploaded to the HPC cluster of the University of Helsinki, which in turn calls the main script.
Since the evaluation of our model for different integer-valued parameters is embarrassingly parallel, we use array jobs to perform the calculations. In our case, evaluating all models takes about 15 minutes.
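Inside an array job, each task can identify its slice of the parameter grid via the standard SLURM environment variable; a sketch of how the main script might pick this up (the chunk-selection logic is illustrative):

```r
# SLURM exports SLURM_ARRAY_TASK_ID to each task of an array job.
task_id <- as.integer(Sys.getenv("SLURM_ARRAY_TASK_ID", unset = "1"))

# The task id then selects the corresponding subset of
# integer-valued parameter combinations to optimise.
```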
The Rmarkdown file Error Analysis analyses the convergence properties of the Nelder-Mead and BFGS optimisations for the Gaussian, Laplace, and SGT densities, respectively.
The main take-away is that the optimisation works well except for rather high MA orders (larger than 5).
The Rmarkdown file Model Selection generates AIC and BIC values for all integer-valued parameters and plots them.
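For reference, the two criteria are standard functions of the maximised log-likelihood and the number of free parameters; a generic sketch:

```r
# Generic information criteria (not the package's internal implementation):
aic <- function(loglik, n_par)        -2 * loglik + 2 * n_par
bic <- function(loglik, n_par, n_obs) -2 * loglik + log(n_obs) * n_par
```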
This is a preliminary step for final model selection which is based on both AIC/BIC values and the independence properties of the residuals.
Since the residuals should be independent and non-Gaussian, we perform the Shapiro-Wilk and Jarque-Bera tests as well as the Ljung-Box test in the Rmarkdown file Residual Checks.
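These tests are available in base R and the tseries package; an illustrative sketch on a placeholder residual vector (the actual residuals come from the fitted models):

```r
set.seed(1)
res <- rnorm(200)  # placeholder residuals for illustration only

shapiro.test(res)                            # normality (should be rejected for non-Gaussian shocks)
tseries::jarque.bera.test(res)               # normality via skewness and kurtosis
Box.test(res, lag = 10, type = "Ljung-Box")  # serial correlation
```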
Together with the AIC/BIC values, we decide on the model with integer-valued parameters
In the Rmarkdown file IRFs, we obtain the IRFs for some of the best models (with respect to AIC/BIC/Shapiro-Wilk/Jarque-Bera/Ljung-Box).
We use either estimated values for the static shock transmission matrix
In addition, we generate the IRFs of Blanchard and Quah (1989) and GMR.
In the Rmarkdown file Robust Standard Errors, we obtain different estimates for the standard deviations of the system and noise parameters. In particular, we compare estimates under the assumption of correct model specification with robust ones.
Under correct specification, the information matrix may be calculated as
- the outer product of gradients (OPG) of the scores,
- the Hessian obtained as output of stats::optim(), or
- the analytic version of the Hessian obtained from numDeriv::hessian().

Allowing for misspecification, the robust standard errors are obtained from the sandwich covariance matrix, with the Hessian taken either from stats::optim() or, analytically, from numDeriv::hessian().
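The sandwich construction can be sketched as follows, assuming H is the Hessian of the negative log-likelihood at the optimum and S is the matrix of per-observation scores (both names are illustrative, not the package's internals):

```r
# Robust ("sandwich") covariance: H^{-1} OPG H^{-1}.
robust_vcov <- function(H, S) {
  opg  <- crossprod(S)  # outer product of gradients, t(S) %*% S
  Hinv <- solve(H)
  Hinv %*% opg %*% Hinv
}

# Non-robust alternatives under correct specification: solve(H) or solve(opg).
```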
First, we investigate the OPG and the different versions of the Hessians quantitatively and by plotting the values. Subsequently, we compare the diagonal elements of the OPG and the different versions of the Hessian. Next, we evaluate the implied standard deviations for the system and noise parameters quantitatively and with plots. Finally, we provide the results, i.e. each part of the parameter estimates together with their robust and non-robust standard errors.
To investigate the SGT distributions and Gaussian mixtures, we created two dashboards in the subdirectory flexdashboards. They may also be accessed directly via https://funber.shinyapps.io/sgt_dashboard/ and https://funber.shinyapps.io/mixtures_dashboard/.