In the machine learning world, deployed models degrade over time, yet they must stay dynamic and responsive to real-world scenarios. After deployment, the model's problem-solving capability should be assessed periodically and reported to stakeholders.
The real challenge begins once deployment is complete and the system is live in production, being watched for performance. That is why ML model monitoring is so important for ML engineers and data scientists.
What is ML Model Monitoring?
ML monitoring is the practice of tracking the models developed and deployed in a business so that they continue to deliver positive business value. It also covers troubleshooting the input data and the models in production and validating their predictions.
A deployed model must satisfy the stakeholders’ business requirements and any applicable governance rules. In addition, it should keep improving its results and sustaining business value. Hence, ML models must be monitored across their input data, upstream and downstream dependencies, infrastructure, serving layer, and general maintenance.
Methods of Monitoring the ML Model
ML model monitoring is generally done in two ways:
Functional monitoring: Here, the data, predictions, and models are monitored. The input data can suffer quality issues, and changes or losses may occur at the data source. On the model side, model drift, model versions, and model configuration all become factors to watch.
The output (predictions) is evaluated against ground truth where actual outcomes are available, and monitored for prediction drift and metric degradation otherwise. This work is typically done by data scientists or ML engineers.
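As one concrete illustration, input drift can be flagged with a two-sample statistical test against a reference sample kept from training. The sketch below is a minimal example under assumptions: the feature arrays are synthetic stand-ins, and the 0.05 significance level is an illustrative choice, not a universal threshold.

```python
# Minimal input-drift check: a two-sample Kolmogorov-Smirnov test per feature.
# The arrays below are synthetic stand-ins; alpha=0.05 is an illustrative choice.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live feature distribution differs significantly from the reference."""
    _, p_value = ks_2samp(reference, live)
    return p_value < alpha

reference_feature = np.random.normal(0.0, 1.0, size=5_000)  # stand-in for a training-time sample
live_feature = np.random.normal(0.4, 1.0, size=1_000)       # stand-in for recent production data

if detect_drift(reference_feature, live_feature):
    print("Possible data drift on this feature - investigate the upstream source.")
```

In practice you would run a test like this per feature on a schedule, since a shift in even one input distribution can silently degrade predictions.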
Operational monitoring: This is the monitoring of usage and cost. IT operations staff and ML engineers monitor system performance metrics and system reliability.
Pipeline monitoring covers both the data pipelines and the model pipelines, which are watched by the data engineers and the data operations team. The same teams also keep track of hosting and inference costs.
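A simple way to start on the operational side is to instrument the prediction path itself. The sketch below is a hedged example: `DummyModel` and the log format are hypothetical placeholders, and in a real deployment these measurements would be exported to a metrics system such as Prometheus or CloudWatch rather than only logged.

```python
# Wrap the prediction call to record latency and batch size on every request.
# DummyModel is a hypothetical stand-in for a real trained model.
import time
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("model-service")

def timed_predict(model, features):
    start = time.perf_counter()
    prediction = model.predict(features)
    latency_ms = (time.perf_counter() - start) * 1000
    logger.info("prediction_latency_ms=%.2f batch_size=%d", latency_ms, len(features))
    return prediction

class DummyModel:
    def predict(self, features):
        return [0 for _ in features]  # placeholder: always predicts class 0

print(timed_predict(DummyModel(), [[1.0, 2.0], [3.0, 4.0]]))
```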
Getting Started with ML Model Monitoring
- Gather the requirements by analyzing the business needs.
- Survey the existing features and the tools the ML engineers are already using.
- Choose a monitoring platform that adapts easily to the existing system and integrates with it to calculate all the required metrics.
- Set up the platform and configure alerts to be triggered whenever a threshold is crossed (a minimal alerting sketch follows this list).
- Make sure the system is monitored continuously and the troubleshooting process is documented, both for future reference and for backtracking defects.
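The alerting step can be as simple as comparing each computed metric against a configured threshold. This is a minimal sketch under assumed values: the 0.85 accuracy threshold and the log-based alert channel are illustrative placeholders for whatever paging or notification system you actually use.

```python
# Threshold-based alert: fire when a monitored metric falls below its target.
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ml-monitoring")

ACCURACY_THRESHOLD = 0.85  # assumed service-level target, not a universal value

def check_and_alert(metric_name: str, value: float, threshold: float) -> None:
    """Log a warning when the metric value drops below the threshold."""
    if value < threshold:
        logger.warning("ALERT: %s=%.3f fell below threshold %.3f",
                       metric_name, value, threshold)

check_and_alert("accuracy", 0.81, ACCURACY_THRESHOLD)  # triggers the alert
```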
To track the performance of an ML model, you need to decide which metrics to compute and analyze; a sketch that calculates several of them follows the list. Some common metrics are:
● Accuracy
● Confusion matrix
● Log loss
● Coefficient of determination (R²)
● Mean absolute error (MAE)
● Precision
● Recall
● ROC and AUC
● Distribution of error
● F1 score
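Most of these classification metrics are available off the shelf. The sketch below computes several of them with scikit-learn on a toy monitoring batch; the labels and probabilities are fabricated purely for illustration.

```python
# Computing several of the listed metrics with scikit-learn on a toy batch.
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             log_loss, precision_score, recall_score,
                             roc_auc_score)

y_true = [0, 1, 1, 0, 1, 0, 1, 1]                    # ground-truth labels
y_prob = [0.2, 0.8, 0.6, 0.3, 0.9, 0.55, 0.7, 0.45]  # predicted probabilities
y_pred = [1 if p >= 0.5 else 0 for p in y_prob]      # thresholded at 0.5

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("f1 score :", f1_score(y_true, y_pred))
print("log loss :", log_loss(y_true, y_prob))
print("roc auc  :", roc_auc_score(y_true, y_prob))
print("confusion matrix:\n", confusion_matrix(y_true, y_pred))
```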
Beyond this, the metrics you choose must be robust, able to adapt to future changes, and tied to realistic goals. Finally, note which models perform well and which perform poorly, along with their performance figures.
The data used can be preprocessed, and the model trained and retrained whenever required. It is always good to start with a simple, interpretable model that fits easily into the existing system. With all of the above in place, ML model monitoring can be tracked end to end.
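One way to operationalize the retraining decision is to compare the monitored accuracy against the accuracy recorded at deployment. This is a minimal sketch under assumptions: the five-point tolerance and the example accuracy figures are illustrative, and real systems usually combine several signals before retraining.

```python
# Retraining trigger: flag when monitored accuracy drops too far below the
# accuracy recorded at deployment. The tolerance and figures are illustrative.
def needs_retraining(baseline_accuracy: float, current_accuracy: float,
                     tolerance: float = 0.05) -> bool:
    """Return True when accuracy has fallen more than `tolerance` below baseline."""
    return (baseline_accuracy - current_accuracy) > tolerance

if needs_retraining(baseline_accuracy=0.92, current_accuracy=0.84):
    print("Performance degraded - schedule retraining on fresh data.")
```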