
Improve indicator of accuracy #1835

Open
apiology opened this issue Mar 30, 2023 · 3 comments

@apiology (Member)

How accurate do you think microCOVID is currently, with the (fairly subtle) helper text urging users to manually look up weekly new cases per 100k from covidactnow?

If folks think it's reasonably accurate, would it be possible to make that guidance more prominent? @jhwgh1968's suggestion above - "remove the location-based prevalence data and force the user to enter it themselves before the other parts of the form show up" - would be best, but if we don't have the available skillsets for that, it would be helpful to even just put the current text in red and also add a note in red above the November 17, 2022 update towards the top of the page.
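As a rough illustration of what that gating could look like, here is a hypothetical React/TypeScript sketch. None of this is from the microCOVID codebase; `PrevalenceGate` is an invented name, and the wording and styling are placeholders for whatever the maintainers would actually want:

```typescript
// Hypothetical sketch of @jhwgh1968's suggestion, not microCOVID's actual
// components: hide the rest of the calculator until the user has typed a
// weekly "cases per 100k" value themselves.
import React, { useState } from 'react';

export function PrevalenceGate(props: { children: React.ReactNode }) {
  const [casesPer100k, setCasesPer100k] = useState('');
  const parsed = Number(casesPer100k);
  const valid = casesPer100k !== '' && Number.isFinite(parsed) && parsed >= 0;

  return (
    <div>
      <label style={{ color: 'red' }}>
        Enter weekly new cases per 100k (look this up yourself; automatic
        location data may be stale):{' '}
        <input
          value={casesPer100k}
          onChange={(e) => setCasesPer100k(e.target.value)}
        />
      </label>
      {/* The rest of the form only renders once a value has been entered. */}
      {valid ? props.children : null}
    </div>
  );
}
```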

If folks don't think it's accurate enough to do more good than harm, could we pursue @apiology's suggestion above ("Mild frontend effort: Ordered shutdown of site with a note at top, disabling users from further use to avoid the hazard of using that stale data.") or, if we lack the resources for that, add a clear and prominent warning to that effect (perhaps red text in the updates section and under Step 1)?

There are local event venues in SF which are making decisions to loosen safety requirements citing microCOVID data as a source, which makes me think what we currently have in place is misleading. (I don't see a bright "data is stale" warning come up when I run an estimate, contrary to what @apiology mentioned in his "No effort" option #1 in the post at top.)

Unfortunately, I don't have the skills to write any code, but wanted to flag this in case anyone with the relevant skills has bandwidth to pursue some of the lower-effort options.

Originally posted by @sameerjain0123 in #1792 (reply in thread)

@zerotrickpony commented Mar 30, 2023

TL;DR: I still like / use / recommend this tool, because the relative-risk research + math still seems sound to me. But mc.org is "garbage in, garbage out", and reported case rates are often garbage. So yes, I vote for more angry red text to warn that reported case rates are an increasingly poor starting point for estimating prevalence.

I'm not a scientist or anything, but I'm pretty convinced that the undercount factor for daily case rates continues to change, and is currently probably in the territory of 30x-50x and climbing (evidence available upon request). So I tell my friends and family to manually type one of only three values into the "cases per 100k" box: 250, 500, or 1000, depending on whether their local sewer data shows a dip, a plateau, or a spike, respectively. (These translate roughly to 0.5%, 1%, or 2% true prevalence, which I believe is closer to right based on data from prevalence studies with better methodology.) I might change this guidance in the future based on variants or waves, so I wouldn't be comfortable hard-coding these guesses permanently into the tool.
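To make the arithmetic behind that rule of thumb explicit, here is a minimal TypeScript sketch. Nothing in it comes from the microCOVID codebase: `SewerTrend`, `suggestedCasesPer100k`, and the effective ×2 multiplier are all inferred from the numbers quoted above (500 reported per 100k is 0.5% weekly incidence, quoted as ≈ 1% true prevalence):

```typescript
// Hypothetical sketch of the rule of thumb above; none of these names exist
// in microCOVID. The ×2 multiplier is inferred from the quoted mapping
// (500 per 100k ≈ 1% true prevalence), not taken from the tool's model.

type SewerTrend = 'dip' | 'plateau' | 'spike';

// Value to type into the "cases per 100k" box for each sewer-data trend.
function suggestedCasesPer100k(trend: SewerTrend): number {
  switch (trend) {
    case 'dip':     return 250;  // ≈ 0.5% assumed true prevalence
    case 'plateau': return 500;  // ≈ 1%
    case 'spike':   return 1000; // ≈ 2%
  }
}

// Reported weekly cases per 100k → assumed true prevalence, as a fraction.
// Example: 500 / 100_000 = 0.5% reported; ×2 gives the quoted ≈ 1%.
function impliedTruePrevalence(casesPer100k: number, multiplier = 2): number {
  return (casesPer100k / 100_000) * multiplier;
}

console.log(impliedTruePrevalence(suggestedCasesPer100k('plateau'))); // 0.01, i.e. 1%
```

Note that the ×2 here is the multiplier implied between the value typed into the box and the assumed true prevalence; the 30x-50x undercount figure applies to raw reported case rates, which is exactly why the commenter bypasses them.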

If public policy or event policy is being set based on garbage in, then I guess I have to agree that it's a harm in locales where test reporting has fallen off.

@apiology (Member, Author) commented Apr 1, 2023

There are a few intertwined ideas in the above, and a few potential approaches to address them, but know that contributors are considering them in the time they can offer. Thanks for speaking up, @sameerjain0123 and @zerotrickpony!

@sameerjain0123 commented Apr 3, 2023 via email
