
Amplitude estimation of Ramsey fringe #144

Open
Justin318 opened this issue Mar 13, 2018 · 1 comment

@Justin318

Hi,
I am trying to modify the simple_precession_estimation tutorial at
https://github.com/QInfer/qinfer-examples/blob/master/simple_precession_estimation.ipynb
to estimate the amplitude of a Ramsey fringe. The model is set up to estimate two parameters, the amplitude and offset of the signal (signal = a * np.cos(true_omega * ts + phase) + offset), with omega and phase as fixed parameters.

I encountered two problems:
a. I used the method 'are_models_valid' of Model

def are_models_valid(self, modelparams):
    """
    Returns True for each particle whose amplitude and offset keep
    the signal (a * cos + offset) within (0, 1).
    """
    a = modelparams[:, 0]
    offset = modelparams[:, 1]
    return np.logical_and(-a + offset > 0, a + offset < 1)

to limit the range of parameters, but it does not seem to have any effect, and the algorithm throws errors. I ended up limiting the parameter range by using a post-selected distribution as the prior:

prior = PostselectedDistribution(
    UniformDistribution(
        [
            [0.30, 0.5],
            [0.30, 0.5],
        ]),
    model,
    maxiters=10000
)

I am not sure whether this is the proper way to use the package from the package designer's point of view.

b. With these modifications I could get the algorithm running. When I run a bootstrap on the model, the inferred amplitudes are biased toward smaller values if the "true" amplitude is close to the maximum value of 0.5. Is that the expected behavior? I expected the Bayesian method to remove this kind of bias.

The full code can be found in this notebook https://github.com/Justin318/test_qinfer_share/blob/master/simple_est_amplitude.ipynb

Thanks for your time.

@ihincks
Collaborator

ihincks commented Mar 13, 2018

Hi Justin. I can't easily run your code because I don't know what to use as data. However, I can quickly answer your two points.

a) are_models_valid does nothing more than assign a boolean to each particle. This is used during the resampling step (the only time particles are moved) to help make sure particles are not moved out of bounds with respect to the model. Your prior (as you found out through the errors) should only ever output particles that are in bounds---it doesn't really make sense to assert a finite prior probability that something impossible is true.
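For illustration, here is a standalone numpy sketch (not using qinfer itself) of the kind of boolean mask are_models_valid returns, with the same constraint as the model above:

```python
import numpy as np

# Particles as rows of [a, offset], mirroring the model's are_models_valid:
# a particle is valid iff offset - a > 0 and a + offset < 1.
particles = np.array([
    [0.40, 0.50],   # 0.50 - 0.40 > 0 and 0.90 < 1  -> valid
    [0.30, 0.80],   # 0.30 + 0.80 >= 1              -> invalid
    [0.45, 0.40],   # 0.40 - 0.45 <= 0              -> invalid
])
a, offset = particles[:, 0], particles[:, 1]
valid = np.logical_and(offset - a > 0, a + offset < 1)
print(valid)  # [ True False False]
```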

I tend to avoid PostselectedDistribution since it's kind of ad hoc. I would favour something like the following, where the offset has uniform probability of being anywhere between 0 and 1 (change this if you like), and, given an offset, the amplitude is chosen uniformly from everything that is valid. (untested)

import numpy as np
import qinfer

class MyDistribution(qinfer.Distribution):
    @property
    def n_rvs(self):
        return 2

    def sample(self, n=1):
        offset = np.random.rand(n)
        # Amplitude uniform on [0, min(offset, 1 - offset)), so that
        # offset - a > 0 and a + offset < 1 hold by construction.
        a = np.minimum(offset, 1 - offset) * np.random.rand(n)
        # Columns ordered [a, offset] to match the model's parameter order.
        return np.concatenate([a[:, np.newaxis], offset[:, np.newaxis]], axis=1)

prior = MyDistribution()
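A quick self-contained check of the same sampling scheme in plain numpy (an illustrative sketch; the seed and sample count are arbitrary):

```python
import numpy as np

# offset ~ U[0, 1), then a ~ U[0, min(offset, 1 - offset)), so every
# sample satisfies offset - a > 0 and a + offset < 1 by construction.
rng = np.random.default_rng(0)
offset = rng.random(10000)
a = np.minimum(offset, 1 - offset) * rng.random(10000)
all_valid = bool(np.all((offset - a > 0) & (a + offset < 1)))
print(all_valid)  # True: every sample is in bounds
```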

b) Yes, I would expect longer tails on the side away from 0.5 in both a bootstrap and a Bayesian posterior. I would call this hedging rather than bias. Tails near a hard boundary in the low-to-mid data regime are to be expected. With enough data, though, you will be in central limit theorem territory and things will be symmetric about the true value.

As an extreme example, suppose you flip a coin with unknown bias 100 times and get 100 heads. Assigning an estimate of p = 1 is kind of crazy, because it is very plausible that p = 0.999, but an estimate of p = 1 asserts you would be willing to bet everything that a tails will never happen. It is therefore reasonable to have a posterior with a tail to the left of 1, which is exactly what you get if you use a uniform prior on p.
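To make the arithmetic concrete, a small sketch using the standard Beta-Bernoulli conjugate update:

```python
# 100 heads in 100 flips under a uniform Beta(1, 1) prior on the bias p
# gives a Beta(101, 1) posterior.
n, k = 100, 100
alpha, beta = k + 1, n - k + 1                  # Beta(101, 1)
posterior_mean = alpha / (alpha + beta)         # Laplace's rule of succession
prob_below_099 = 0.99 ** alpha                  # Beta(101, 1) CDF is x**101
print(posterior_mean)   # ~0.990: hedged away from the boundary at p = 1
print(prob_below_099)   # ~0.36: substantial posterior mass below 0.99
```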
