AI Observability & Evaluation
Updated May 24, 2024
Evaluate and monitor ML models from validation to production. Join our Discord: https://discord.com/invite/xZjKRaNp8b
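Evaluation and monitoring of a deployed model usually means tracking a quality metric over recent production traffic and alerting when it degrades. A minimal hand-rolled sketch of that idea (not the API of any tool listed here; all names are illustrative):

```python
# Minimal sketch: rolling-window accuracy over a stream of (prediction, label)
# pairs, flagging windows where accuracy drops below a threshold.
from collections import deque

def rolling_accuracy(predictions, labels, window=100):
    """Yield (index, accuracy) once a full window of predictions is seen."""
    hits = deque(maxlen=window)
    for i, (pred, label) in enumerate(zip(predictions, labels)):
        hits.append(pred == label)
        if len(hits) == window:
            yield i, sum(hits) / window

def alert_on_degradation(predictions, labels, window=100, threshold=0.9):
    """Return indices where rolling accuracy fell below the threshold."""
    return [i for i, acc in rolling_accuracy(predictions, labels, window)
            if acc < threshold]
```

Production systems add delayed/missing labels, per-segment metrics, and dashboards on top of this basic loop, which is where the tools below come in.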
Sister project to OpenLLMetry, but in TypeScript. Open-source observability for your LLM application, based on OpenTelemetry
This repository contains notes, practice exercises, sample outputs, and homework from DataTalks.Club's MLOps Zoomcamp 2024 cohort
Open-source observability for your LLM application, based on OpenTelemetry
🐢 Open-Source Evaluation & Testing for LLMs and ML models
NannyML: post-deployment data science in Python
A Python library to send data to Arize AI!
Toolkit for evaluating and monitoring AI models in clinical settings
Free MLOps course from DataTalks.Club
Deepchecks: tests for continuous validation of ML models & data. Deepchecks is a holistic open-source solution for all of your AI & ML validation needs, enabling you to thoroughly test your data and models from research to production.
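"Testing your data" typically means checking each incoming batch against a schema before it reaches the model. A hand-rolled sketch of that kind of check (illustrative only, not the Deepchecks API):

```python
# Illustrative batch validation: flag rows with missing required columns
# or non-numeric values in columns that should be numeric.
def validate_records(records, required_columns, numeric_columns=()):
    """Return a list of human-readable issues found in a batch of dicts."""
    issues = []
    for i, row in enumerate(records):
        missing = required_columns - row.keys()
        if missing:
            issues.append(f"row {i}: missing columns {sorted(missing)}")
        for col in numeric_columns:
            value = row.get(col)
            if value is not None and not isinstance(value, (int, float)):
                issues.append(f"row {i}: column {col!r} is not numeric")
    return issues
```

Libraries in this space ship large suites of such checks (duplicates, label leakage, distribution shifts) plus reporting, rather than a single function.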
Solutions to the MLOps Zoomcamp course
A curated list of awesome open source tools and commercial products for monitoring data quality, monitoring model performance, and profiling data 🚀
Aditi @ Building Production-Grade LLMs: curated DevOps and AI learning resources. If you find the repository useful, please give it a star. Happy learning!
⚓ Eurybia monitors model drift over time and secures model deployment with data validation
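One common drift statistic behind tools like these is the Population Stability Index (PSI), which compares how a feature's values distribute across quantile bins in a reference sample versus current traffic. A self-contained sketch (the actual APIs of Eurybia, NannyML, etc. differ):

```python
# Hand-rolled Population Stability Index (PSI); a rule of thumb is that
# PSI > 0.2 suggests significant drift between the two samples.
import math

def psi(reference, current, bins=10):
    """PSI between a reference sample and a current sample of a feature."""
    ref_sorted = sorted(reference)
    # Quantile bin edges taken from the reference distribution.
    edges = [ref_sorted[int(len(ref_sorted) * i / bins)] for i in range(1, bins)]

    def proportions(sample):
        counts = [0] * bins
        for x in sample:
            idx = sum(x >= e for e in edges)  # which bin x falls into
            counts[idx] += 1
        # Small epsilon avoids log(0) for empty bins.
        return [(c + 1e-6) / (len(sample) + bins * 1e-6) for c in counts]

    expected = proportions(reference)
    actual = proportions(current)
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))
```

Identical samples give a PSI of 0; a shifted distribution piles mass into different bins and drives the score up.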
Version, share, deploy, and monitor models.
"1 config, 1 command from Jupyter Notebook to serving millions of users": a full-stack, on-premises MLOps system for computer vision, from data versioning to model monitoring and drift detection.
MLOps workshop with Amazon SageMaker
Experiments with Model Training, Deployment & Monitoring