Minor updates: improve docstring to correlated MCMC and remove some deprecated parameters #108

Merged 2 commits on May 15, 2024
10 changes: 5 additions & 5 deletions abcpy/NN_utilities/utilities.py
@@ -79,9 +79,9 @@ def jacobian(input, output, diffable=True):
         g = torch.zeros(output.shape).to(input)
         g[:, i] = 1
         if diffable:
-            J[:, i] = torch.autograd.grad(output, input, g, only_inputs=True, retain_graph=True, create_graph=True)[0]
+            J[:, i] = torch.autograd.grad(output, input, g, retain_graph=True, create_graph=True)[0]
         else:
-            J[:, i] = torch.autograd.grad(output, input, g, only_inputs=True, retain_graph=True)[0]
+            J[:, i] = torch.autograd.grad(output, input, g, retain_graph=True)[0]
     J = J.reshape(output.shape[0], output.shape[1], in_size)
     return J.transpose(2, 1)

@@ -107,18 +107,18 @@ def jacobian_second_order(input, output, diffable=True):
     for i in range(output.shape[1]):
         g = torch.zeros(output.shape).to(input)
         g[:, i] = 1
-        J[:, i] = torch.autograd.grad(output, input, g, only_inputs=True, retain_graph=True, create_graph=True)[0]
+        J[:, i] = torch.autograd.grad(output, input, g, retain_graph=True, create_graph=True)[0]
     J = J.reshape(output.shape[0], output.shape[1], in_size)
 
     for i in range(output.shape[1]):
         for j in range(input.shape[1]):
             g = torch.zeros(J.shape).to(input)
             g[:, i, j] = 1
             if diffable:
-                J2[:, i, j] = torch.autograd.grad(J, input, g, only_inputs=True, retain_graph=True, create_graph=True)[
+                J2[:, i, j] = torch.autograd.grad(J, input, g, retain_graph=True, create_graph=True)[
                     0][:, j]
             else:
-                J2[:, i, j] = torch.autograd.grad(J, input, g, only_inputs=True, retain_graph=True)[0][:, j]
+                J2[:, i, j] = torch.autograd.grad(J, input, g, retain_graph=True)[0][:, j]
 
     J2 = J2.reshape(output.shape[0], output.shape[1], in_size)
13 changes: 11 additions & 2 deletions abcpy/inferences.py
@@ -4072,9 +4072,11 @@ def _compute_accepted_cov_mats(self, covFactor, new_cov_mats):

 class MCMCMetropoliHastings(BaseLikelihood, InferenceMethod):
     """
-    Simple Metropolis-Hastings MCMC working with the approximate likelihood functions Approx_likelihood, with
+    Metropolis-Hastings MCMC working with the approximate likelihood functions Approx_likelihood, with
     multivariate normal proposals.
 
+    As the Approximate Likelihood is estimated from simulations, this is a pseudo-marginal MCMC approach.
+
     Parameters
     ----------
     root_models : list
@@ -4150,6 +4152,12 @@ def sample(self, observations, n_samples, n_samples_per_param=100, burnin=1000,
         use MCMC with transformed space, you need to specify lower and upper bounds in the corresponding parameters (see
         details in the description of `bounds`).
 
+        As the Approximate Likelihood is estimated from simulations, this is a pseudo-marginal MCMC
+        approach. As pseudo-marginal methods can lead to "sticky" chains, this method also implements correlated
+        pseudo-marginal MCMC [2], where the noise used to estimate the target at subsequent steps is correlated.
+        See the argument `n_groups_correlated_randomness` to learn more.
+
+
         The returned journal file contains also information on acceptance rates (in the configuration dictionary).
 
         [1] Haario, H., Saksman, E., & Tamminen, J. (2001). An adaptive Metropolis algorithm. Bernoulli, 7(2), 223-242.
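
Note: to make the pseudo-marginal idea concrete, here is a heavily simplified, self-contained sketch (a toy Gaussian model with a flat prior and a synthetic-likelihood-style estimator; names such as `loglik_hat` and `obs` are invented for illustration and are not abcpy's API). The key point is that the noisy estimate at the current state is stored and reused, never recomputed:

```python
import numpy as np

rng = np.random.default_rng(0)
obs = rng.normal(1.0, 1.0, size=20)            # toy observed data

def loglik_hat(theta, n_sim=100):
    """Noisy log-likelihood estimate built from n_sim model simulations."""
    sims = rng.normal(theta, 1.0, size=n_sim)  # simulate the toy model
    mu, var = sims.mean(), sims.var() + 1e-6   # fit a Gaussian to the simulations
    return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                        - 0.5 * (obs - mu) ** 2 / var))

theta, ll = 0.0, loglik_hat(0.0)
chain = []
for _ in range(5000):
    prop = theta + rng.normal(0.0, 0.5)        # normal proposal (1-d here)
    ll_prop = loglik_hat(prop)                 # fresh noise at the proposed point
    if np.log(rng.uniform()) < ll_prop - ll:   # flat prior: likelihood ratio only
        theta, ll = prop, ll_prop              # store the estimate with the state
    chain.append(theta)

print(np.mean(chain[1000:]))                   # roughly the posterior mean, near 1
```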
@@ -4223,7 +4231,8 @@ def sample(self, observations, n_samples, n_samples_per_param=100, burnin=1000,
             and was discussed in [2] for the Bayesian Synthetic Likelihood framework. Notice that, when
             n_groups_correlated_randomness > 0 and speedup_dummy is True, you obtain different results for different
             values of n_groups_correlated_randomness due to different ways of handling random seeds.
-            When None, we do not keep track of the random seeds. Default value is None.
+            When None, we do not keep track of the random seeds (i.e. it is a vanilla pseudo-marginal); the same is true
+            when n_groups_correlated_randomness=1. Default value is None.
         use_tqdm : boolean, optional
             Whether using tqdm or not to display progress. Defaults to True.
         journal_file: str, optional
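
Note: a toy sketch of what the grouping means in correlated pseudo-marginal MCMC (invented names, not abcpy's implementation): the auxiliary random numbers behind the likelihood estimate are split into `n_groups` blocks and only one block is refreshed per step, so consecutive estimates share most of their noise. With a single group, all the noise is refreshed at every step, which is the vanilla pseudo-marginal case described above:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sim, n_groups = 100, 10
groups = np.array_split(np.arange(n_sim), n_groups)

def loglik_hat(theta, u):
    """Toy estimator driven by fixed auxiliary noise u."""
    sims = theta + u                           # trivial 'simulator': shift the noise
    return float(-0.5 * np.sum((sims - 1.0) ** 2) / n_sim)

u = rng.normal(size=n_sim)                     # all randomness behind the estimator
theta, ll = 0.0, loglik_hat(0.0, u)
for step in range(2000):
    idx = groups[step % n_groups]              # refresh only one block of noise
    u_prop = u.copy()
    u_prop[idx] = rng.normal(size=idx.size)
    prop = theta + rng.normal(0.0, 0.3)
    ll_prop = loglik_hat(prop, u_prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll, u = prop, ll_prop, u_prop   # accept the parameter AND its noise
```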