Many data scientists, ML researchers, and model-driven organizations have built a model to solve a specific problem, and felt relief when the model “works” in production. Yet what happens when hundreds of models are running in production and interacting with the real world? What happens when no one is tracking how those models perform on live data? Unfortunately, bias and variance can creep into models over time, causing them to drift into worthlessness, an outcome that undermines any positive impact on the business.
As Domino seeks to accelerate research and reinforce data scientists’ positive impact on their companies, we reached out to Don Miner to collaborate on the webinar “Machine Learning Vital Signs: Metrics and Monitoring Models in Production,” which covers tracking machine learning models in production to ensure their reliability, consistency, and performance over time. Miner’s prior experience as a data scientist, engineer, and CTO gives him a unique and pragmatic perspective. This blog post includes slide excerpts and two key ML vital signs, accuracy and output distribution; attend the full webinar for more vital signs and in-depth insights.
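To make the output-distribution vital sign concrete, here is a minimal, illustrative sketch (not Domino’s or Miner’s implementation) of comparing a model’s live output scores against a reference distribution captured at deployment time, using a two-sample Kolmogorov-Smirnov statistic; the function name, data, and alert threshold are hypothetical choices for this example.

```python
import random

def ks_statistic(sample_a, sample_b):
    """Max absolute difference between the two empirical CDFs."""
    a, b = sorted(sample_a), sorted(sample_b)
    d = 0.0
    ia = ib = 0
    for v in sorted(set(a) | set(b)):
        # Advance each pointer past all values <= v to get the empirical CDF at v.
        while ia < len(a) and a[ia] <= v:
            ia += 1
        while ib < len(b) and b[ib] <= v:
            ib += 1
        d = max(d, abs(ia / len(a) - ib / len(b)))
    return d

random.seed(0)
# Hypothetical model scores: reference captured at deploy time, live scores shifted.
reference = [random.gauss(0.0, 1.0) for _ in range(1000)]
live = [random.gauss(0.5, 1.0) for _ in range(1000)]

d = ks_statistic(reference, live)
drifted = d > 0.09  # rough 1% critical value for two samples of 1000; tune per model
print(f"KS statistic={d:.3f}, drift detected={drifted}")
```

In practice this check would run on a schedule against recent production scores, alerting when the statistic exceeds the chosen threshold so someone investigates before the model drifts into worthlessness.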
Continue reading on Domino’s blog.