This project addresses inefficiencies in the prior authorization (PA) process for veterinary procedures. Healthy Pets receives requests that must be evaluated for coverage and clinical appropriateness. Some are handled by rule-based systems, but many require manual review, creating delays, operational burden, and inconsistent decisions. This repository presents a machine learning model that predicts which PA requests can be safely auto-approved.
- Predict whether a PA request should be auto-approved using machine learning.
- Reduce manual reviews while maintaining clinical appropriateness.
- Seamlessly integrate with existing rule-based systems.
The dataset used in this project was provided as part of a private case study and is not publicly shareable.
However, the notebook and code are designed to work with any structured dataset containing prior authorization records (e.g., approval status, procedure type, provider info, claim history). You may adapt it using synthetic or public data with a similar schema.
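To experiment without the original data, you can generate a synthetic frame with a similar shape. The column names below are illustrative stand-ins for the fields mentioned above, not the actual case-study schema:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1900  # roughly the size of the case-study dataset

# Hypothetical schema mirroring the fields described above;
# all column names are illustrative, not the real case-study fields.
synthetic = pd.DataFrame({
    "procedure_type": rng.choice(["checkup", "surgery", "imaging", "dental"], n),
    "provider_id": rng.integers(1, 50, n),
    "prior_claims": rng.poisson(3, n),
    "prior_approvals": rng.poisson(2, n),
    "days_since_last_claim": rng.integers(0, 730, n),
    "approved": rng.random(n) < 0.725,  # ~72.5% approval rate, as observed
})
print(synthetic.shape)  # (1900, 6)
```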
Key findings from a dataset of ~1,900 prior authorization requests:
- 72.5% were approved.
- Approval likelihood increases for routine procedures (e.g., check-ups).
- Providers and prior history (claims, approvals) influence decision outcomes.
- Time since last request or claim is a strong predictor: “cold cases” tend to be denied.
Three models were evaluated:
- Logistic Regression
- Random Forest
- XGBoost
The Random Forest model was selected for its high F1 score and interpretability.
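A minimal training sketch for the selected model, using scikit-learn with toy numeric features standing in for the engineered PA features (the real notebook trains on the case-study data; XGBoost additionally requires the `xgboost` package):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Toy numeric features; a synthetic stand-in, not the case-study data.
rng = np.random.default_rng(0)
X = rng.random((1000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.2, 1000) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"F1: {f1_score(y_te, clf.predict(X_te)):.2f}")
```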
- Prior approvals and claims history
- Provider/service-level patterns
- Time-based features
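The history and timing features above can be derived from a raw request log with pandas. This is a sketch over a hypothetical log; the column names are illustrative:

```python
import pandas as pd

# Hypothetical raw request log; column names are illustrative.
requests = pd.DataFrame({
    "provider_id": [1, 1, 2, 2, 2],
    "request_date": pd.to_datetime(
        ["2024-01-05", "2024-03-01", "2024-01-10", "2024-01-20", "2024-06-01"]),
    "approved": [1, 0, 1, 1, 0],
})
requests = requests.sort_values(["provider_id", "request_date"])

# Time-based feature: days since the provider's previous request.
requests["days_since_last"] = (
    requests.groupby("provider_id")["request_date"].diff().dt.days
)

# History feature: the provider's prior approvals, excluding the current row.
requests["prior_approvals"] = (
    requests.groupby("provider_id")["approved"].cumsum() - requests["approved"]
)
print(requests[["provider_id", "days_since_last", "prior_approvals"]])
```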
This flow represents how PA requests move through the ML pipeline:
- Submission (PA Request)
- Preprocessing (Data Cleaning & Transformation)
- Feature Engineering (Claims, Timing, Provider History)
- ML Prediction (Random Forest)
- Threshold Decision (e.g., 0.90)
  - Auto-Approval (score at or above threshold)
  - Manual Review (score below threshold)
- Manual Decision Outcome
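The threshold decision step can be sketched as a simple routing function (the function name and signature are illustrative, not from the notebook):

```python
def route_request(approval_prob: float, threshold: float = 0.90) -> str:
    """Route a scored PA request: auto-approve at or above the threshold,
    otherwise send it to manual review, per the flow above."""
    return "auto_approval" if approval_prob >= threshold else "manual_review"

print(route_request(0.95))  # auto_approval
print(route_request(0.70))  # manual_review
```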
Model outputs were evaluated at various decision thresholds:
| Threshold | Precision | Recall | F1 Score | Auto-Approval Rate |
|---|---|---|---|---|
| 0.47 (Balanced) | 88% | 86% | 0.87 | ~74% |
| 0.90 (Conservative) | 98% | 42% | 0.58 | ~29% |
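Metrics like those in the table can be computed at any candidate threshold. A minimal sketch with toy scores (not the case-study data); "auto-approval rate" is the fraction of requests scored at or above the threshold:

```python
import numpy as np

def threshold_metrics(probs, labels, threshold):
    """Precision, recall, and auto-approval rate at a decision threshold."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    approved = probs >= threshold
    tp = np.sum(approved & (labels == 1))
    precision = tp / max(approved.sum(), 1)
    recall = tp / max((labels == 1).sum(), 1)
    return precision, recall, approved.mean()

# Toy predicted probabilities and true outcomes.
probs = [0.95, 0.91, 0.60, 0.40, 0.88, 0.30]
labels = [1, 1, 1, 0, 0, 0]
for t in (0.47, 0.90):
    p, r, rate = threshold_metrics(probs, labels, t)
    print(f"t={t}: precision={p:.2f} recall={r:.2f} auto-rate={rate:.2f}")
```

Raising the threshold trades recall (and automation coverage) for precision, which is why the conservative 0.90 setting auto-approves far fewer requests but makes almost no unsafe approvals.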
- Start with threshold = 0.90 to prioritize safety.
- Monitor precision, recall, and auto-approval rates in production.
- Use manual review for all low-confidence cases.
- Gradually lower the threshold after validation to expand automation coverage.
- Deploy the model in a pilot phase with selected providers or services.
- Continuously monitor model performance and manual review alignment.
- Retrain periodically with new data and feedback.
- Expand automation scope once real-world performance is validated.
```
healthy-pets-auto-approval-ml/
├── healthy_pets_prior_auth_model.ipynb         # Machine learning model notebook
├── Presentation_Healthy_Pets_AutoApproval.pdf  # Summary slides
├── diagram.png                                 # Auto-approval flowchart image
└── README.md                                   # Project documentation
```