Nick Lucius edited this page Sep 28, 2017

# Results of the 2017 Pilot

## 9/28/2017

### Confusion Matrices

| 2017 Pilot | Predicted Elevated | Predicted Normal |
| --- | --- | --- |
| Actual Elevated | 9 (11.2%) | 71 (88.8%) |
| Actual Normal | 24 (2.0%) | 1186 (98.0%) |

| 2016 USGS | Predicted Elevated | Predicted Normal |
| --- | --- | --- |
| Actual Elevated | 2 (3.7%) | 52 (96.3%) |
| Actual Normal | 10 (1.2%) | 825 (98.8%) |

| 2015 USGS | Predicted Elevated | Predicted Normal |
| --- | --- | --- |
| Actual Elevated | 3 (3.2%) | 91 (96.8%) |
| Actual Normal | 14 (1.6%) | 886 (98.4%) |
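Counts like these come from tallying each beach-day by whether the observed E. coli level was actually elevated and whether the model predicted it to be. A minimal sketch of that tally (the example lists below are hypothetical, not the pilot data):

```python
def confusion_counts(actual, predicted):
    """Tally a 2x2 confusion matrix from parallel boolean lists,
    where True means an elevated E. coli reading or prediction."""
    tp = sum(a and p for a, p in zip(actual, predicted))       # true positives
    fn = sum(a and not p for a, p in zip(actual, predicted))   # false negatives
    fp = sum(p and not a for a, p in zip(actual, predicted))   # false positives
    tn = sum(not a and not p for a, p in zip(actual, predicted))  # true negatives
    return tp, fn, fp, tn

# Hypothetical example: four beach-days
actual    = [True, True, False, False]
predicted = [True, False, False, False]
print(confusion_counts(actual, predicted))  # (1, 1, 0, 2)
```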

### Overall Performance

| Model | Specificity | Sensitivity | Correct Decisions |
| --- | --- | --- | --- |
| 2017 Pilot | 98.0% | 11.2% | 92.6% |
| 2016 USGS | 98.8% | 3.7% | 93.0% |
| 2015 USGS | 98.4% | 3.2% | 89.4% |
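Each row of this table follows directly from the corresponding confusion-matrix counts. For example, for the 2017 pilot row:

```python
# Counts from the 2017 pilot confusion matrix:
# TP = 9, FN = 71, FP = 24, TN = 1186
tp, fn, fp, tn = 9, 71, 24, 1186

sensitivity = tp / (tp + fn)               # 9 / 80
specificity = tn / (tn + fp)               # 1186 / 1210
correct = (tp + tn) / (tp + fn + fp + tn)  # 1195 / 1290

print(f"sensitivity={sensitivity:.1%} specificity={specificity:.1%} correct={correct:.1%}")
```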

## 9/19/2017

### Predictive model performance

The model generated predictions for 15 beaches using the rapid test results of 5 beaches. One beach, Juneway, was not tested this year by CPD, so this analysis covers the other 14. A predicted value of 381 is used as the threshold for whether a prediction would issue an advisory; the choice of threshold will be explained further in an upcoming paper. The model's TPR was 11.2% (9 / 80) and its FPR was 2.0% (24 / 1210). By comparison, looking only at these 14 beaches, the USGS/EPA model scored 3.2% TPR / 1.6% FPR in 2015 and 3.7% TPR / 1.2% FPR in 2016.
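The 381 threshold turns a continuous model prediction into a binary advisory decision. A minimal sketch of that step, assuming a prediction at or above the threshold triggers an advisory (the beach names and predicted values below are hypothetical):

```python
ADVISORY_THRESHOLD = 381  # advisory cutoff for the model's predicted level

def would_issue_advisory(predicted_level, threshold=ADVISORY_THRESHOLD):
    """Return True when the predicted level meets or exceeds the threshold."""
    return predicted_level >= threshold

# Hypothetical predictions for a single day
predictions = {"Montrose": 412.0, "Oak Street": 88.5, "63rd Street": 380.9}
advisories = [beach for beach, level in predictions.items()
              if would_issue_advisory(level)]
print(advisories)  # ['Montrose']
```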

### Hybrid method performance

This includes the rapid test results from the 5 beaches, which would be used to issue immediate notifications alongside the predictions if this approach were live. In 2017, 90 correct advisories would have been issued, compared to 71 incorrect advisories. In 2016, relying on the USGS model alone resulted in 16 correct advisories versus 119 incorrect ones. In 2015, the USGS model produced 14 correct advisories versus 184 incorrect ones.