
raise error for 1-sided confidence bounds with confidence levels below 50%? #126

Open
pbstark opened this issue Sep 27, 2017 · 1 comment


pbstark (Contributor) commented Sep 27, 2017

In routines like the binomial confidence interval, the search for the confidence bound assumes that the bound will be between the sample mean and the appropriate a priori bound (e.g., for an upper confidence bound, between \hat{p} and 1). If you call the routine with a confidence level below 50%, that assumption is false and the search algorithm will not converge.
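The failure mode can be sketched with a minimal, self-contained bisection search (hypothetical code, not the package's actual routine). The upper confidence bound solves binom_cdf(x, n, p) = alpha over p in [\hat{p}, 1]; when the confidence level is below 50%, the true bound lies below \hat{p}, so the bracket contains no root. The guard shown is one possible response to that situation:

```python
from math import comb

def binom_cdf(x, n, p):
    """P(X <= x) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(x + 1))

def upper_bound(x, n, confidence, tol=1e-10):
    """Bisection search for the 1-sided upper confidence bound, assuming
    (as described above) that the bound lies in [x/n, 1].
    """
    alpha = 1.0 - confidence
    lo, hi = x / n, 1.0
    # binom_cdf(x, n, p) is decreasing in p, so a root of
    # binom_cdf(x, n, p) = alpha lies in [lo, hi] only if
    # cdf(lo) >= alpha >= cdf(hi).  For confidence < 0.5 this fails.
    if not (binom_cdf(x, n, lo) >= alpha >= binom_cdf(x, n, hi)):
        raise ValueError("bound lies outside [x/n, 1]; "
                         "was alpha passed instead of 1 - alpha?")
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binom_cdf(x, n, mid) > alpha:
            lo = mid   # bound is further right
        else:
            hi = mid
    return (lo + hi) / 2

ub = upper_bound(5, 10, 0.95)   # succeeds: bound is above phat = 0.5
try:
    upper_bound(5, 10, 0.30)    # bracket assumption false: no root in [0.5, 1]
except ValueError as err:
    print("search failed:", err)
```

With x = 5, n = 10, and confidence 0.30, the true upper bound sits below \hat{p} = 0.5, so a search confined to [0.5, 1] can never find it.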

In my experience, I've never seen a 1-sided confidence bound with a confidence level below 50%. Should we raise an exception (on the tacit assumption that the caller used \alpha instead of 1-\alpha)? Or should we (correctly) compute 1-sided confidence bounds with confidence levels below 50%?

I get the impression that Pythonic style might not require either solution: the user is assumed to be somewhat sophisticated.

@jarrodmillman @stefanv


stefanv (Contributor) commented Sep 27, 2017

When designing APIs, we try to make a parameter mean one and only one thing (i.e., no black magic behind the scenes). You also have the option of a warning here, with something like "α is lower than expected (typically >0.5). Perhaps (1 - α) was provided?".

These warnings can be silenced by expert users.
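A minimal sketch of that option, using the standard-library `warnings` module (the function name and message are hypothetical, not from the package):

```python
import warnings

def check_confidence_level(cl):
    """Warn, rather than raise, when the confidence level looks like it
    may have been passed as alpha instead of 1 - alpha."""
    if cl < 0.5:
        warnings.warn(
            "confidence level %.2f is lower than expected (typically > 0.5); "
            "perhaps 1 - alpha was provided?" % cl,
            UserWarning,
        )

# Expert users can silence the warning:
with warnings.catch_warnings():
    warnings.simplefilter("ignore", UserWarning)
    check_confidence_level(0.05)  # no warning emitted
```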
