Discover how we utilized Explainable AI (XAI) techniques to monitor model performance and detect concept and data drift for DAT Group.

By implementing a comprehensive and future-proof approach, we empowered our client to maintain optimal model performance and ensure data integrity, resulting in cost savings and increased trust in their models.

Context & Key Challenges

DAT Group is an international company operating as a trust in the automotive industry. For over 90 years, they have provided data products and services that focus on enabling a digital vehicle lifecycle.

One of their key products is the price estimation of used cars, used by customers ranging from insurance companies to original equipment manufacturers. To estimate prices, they leveraged both domain expertise and market data. However, the workflows for processing and analyzing data were primarily manual, which made it impossible to scale, accelerate, and automate the information retrieval process.

As part of the AI roadmap we supported DAT with, we automated these manual data processes and developed a machine learning (ML) solution that allowed for data-driven price estimations of used cars. These solutions enabled the team to make real-time, data-driven decisions.

As time passed, our client’s teams faced several challenges related to their ML model performance, including:

  • Degradation of model performance over time due to changing data patterns and market conditions (concept drift and data drift).
  • Difficulty in orchestrating and determining the right time to trigger model re-training in order to maintain optimal performance.
  • Ensuring data integrity for new incoming data, preventing the introduction of noise and biases into the model.
  • Manual monitoring of the impact of data drift on model performance.

Our Approach to Model Monitoring using Explainable AI

To address these challenges, we took a comprehensive and future-proof approach comprising the following steps:

1. Implementation of automated data drift detection using SHAP

We leveraged the SHAP (SHapley Additive exPlanations) library to continuously evaluate and track the SHAP values for every new incoming data point.

In general, SHAP values provide insights into the contribution of individual features to a model's prediction for a single data point. Any changes in the distribution of SHAP values indicate that the statistical pattern in the new data may have shifted over time, such that the model's assumptions about the data are no longer accurate. This phenomenon is commonly referred to as concept drift. By monitoring the SHAP values, we can precisely detect when such concept drift occurs and take appropriate measures.
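A minimal sketch of this idea: compare the per-feature distribution of SHAP values from a reference period against a new batch, flagging features whose distributions have shifted significantly. The arrays below stand in for SHAP values already computed with the shap library (e.g. via `shap.TreeExplainer(model).shap_values(X)`); the feature names, synthetic values, and the significance threshold are illustrative assumptions, not the production setup.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_shap_drift(shap_ref, shap_new, feature_names, alpha=0.01):
    """Return the features whose SHAP value distribution shifted
    significantly between the reference period and the new batch,
    using a two-sample Kolmogorov-Smirnov test per feature."""
    drifted = []
    for i, name in enumerate(feature_names):
        _, p_value = ks_2samp(shap_ref[:, i], shap_new[:, i])
        if p_value < alpha:
            drifted.append(name)
    return drifted

rng = np.random.default_rng(42)
features = ["mileage", "age_years", "engine_power"]

# Synthetic stand-ins: reference SHAP values from the training period
# vs. a new batch in which the contribution of "mileage" has shifted.
shap_ref = rng.normal(0.0, 1.0, size=(500, 3))
shap_new = rng.normal(0.0, 1.0, size=(200, 3))
shap_new[:, 0] += 2.0

print(detect_shap_drift(shap_ref, shap_new, features))  # "mileage" is flagged
```

A statistical test like KS is one reasonable choice for comparing the distributions; distance measures such as population stability index or Wasserstein distance would fit the same slot.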

2. Continuous visualization in a model monitoring dashboard

We developed a dynamic dashboard to visualize model performance using both standard evaluation metrics, such as RMSE (root mean squared error) and MAE (mean absolute error), and the SHAP values for each feature. This allowed the client to easily monitor their models, identify any performance issues, and understand how data drift was affecting the model's accuracy.
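The performance metrics such a dashboard plots per monitoring batch can be sketched in a few lines; the function name and the example prices below are illustrative, not taken from the client's system.

```python
import numpy as np

def batch_metrics(y_true, y_pred):
    """Compute RMSE and MAE for one monitoring batch of price predictions."""
    err = np.asarray(y_pred, dtype=float) - np.asarray(y_true, dtype=float)
    return {
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "mae": float(np.mean(np.abs(err))),
    }

# Hypothetical used-car prices (true vs. predicted, in EUR).
print(batch_metrics([10_000, 12_500, 9_800], [10_400, 12_100, 9_900]))
# rmse ≈ 331.7, mae = 300.0
```

Recomputing these per batch and plotting them over time, next to the per-feature SHAP distributions, is what lets performance degradation and drift be read off the same view.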


3. Automated notification for detected data drift

We set up an automated email notification system to alert the Product Owner and Data Scientists whenever model performance degraded or concept drift was detected. This ensured that the relevant stakeholders were promptly informed and could take appropriate action, such as adjusting the model's parameters or initiating retraining, depending on the severity of the drift.
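The triggering rule behind such a notification can be sketched as a small function that composes an alert message when either condition fires; the function name, thresholds, and example values are illustrative assumptions, and the actual delivery (e.g. via SMTP) is configured separately.

```python
def build_alert(rmse, rmse_threshold, drifted_features):
    """Return an alert message when performance degrades or drift is
    detected, or None when there is nothing to report."""
    reasons = []
    if rmse > rmse_threshold:
        reasons.append(f"RMSE {rmse:.0f} exceeds threshold {rmse_threshold:.0f}")
    if drifted_features:
        reasons.append("concept drift detected in: " + ", ".join(drifted_features))
    if not reasons:
        return None  # nothing to email
    return "Model monitoring alert: " + "; ".join(reasons)

print(build_alert(520.0, 400.0, ["mileage"]))
# Model monitoring alert: RMSE 520 exceeds threshold 400; concept drift detected in: mileage
```

Keeping the rule as a pure function makes it easy to unit-test independently of the email transport.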

4. Thorough instruction on model retraining

We provided in-depth training to the Product Owner and Data Scientists on how to retrain their models when necessary. This guidance covered various aspects, including identifying the need for retraining, selecting the appropriate training data, validating the new model’s performance, and deploying the updated model in production. This enabled them to maintain optimal model performance and make better decisions on when to trigger retraining.

Benefits

By implementing this comprehensive approach of Model Monitoring using XAI, our client experienced several benefits, including:

  • Prevention of outdated models in production, ensuring that their models continued to provide accurate predictions as data patterns evolved.
  • Improved model performance over time, as the system was able to adapt to changing data patterns and maintain a high level of accuracy.
  • Increased trust in their models due to higher visibility of performance metrics, enabling stakeholders to make more informed decisions based on the model’s predictions.
  • Cost savings by triggering retraining only when required, avoiding unnecessary retraining efforts and reducing the overall maintenance costs.
  • Greater control over the quality of their models, allowing them to fine-tune model parameters and ensure consistent performance.

This approach enhanced ML model reliability and performance, while XAI fostered stakeholder trust, understanding, and adoption.

Team Involved

One Data Scientist and one XAI Expert worked closely with the client's data science team over a four-month period.

  • Data Scientist: Focused on designing the monitoring solution, establishing performance metrics, and ensuring the system met analytical requirements.
  • XAI Expert: Centered on implementing explainability features, creating model interpretation tools, and ensuring stakeholders could understand system decisions.

The engagement ensured the client received both a functional monitoring solution and the knowledge needed to maintain and operate it independently.
