Discover how we utilized Explainable AI (XAI) techniques to monitor model performance and detect concept and data drift for DAT Group.

By implementing a comprehensive and future-proof approach, we empowered our client to maintain optimal model performance and ensure data integrity, resulting in cost savings and increased trust in their models.

Context & Key Challenges

DAT Group is an international company operating as a trust in the automotive industry. For over 90 years, it has provided data products and services that focus on enabling a digital vehicle lifecycle.

One of their key products is the price estimation of used cars. It’s used by various customers, from insurance companies to original equipment manufacturers. To estimate prices, they leveraged both domain expertise and market data. The workflows for processing and analyzing data were primarily manual, which made it impossible to scale, accelerate, and automate the information retrieval process.

As part of the AI roadmap we supported DAT with, we automated these manual data processes and developed a machine learning (ML) solution that allowed for data-driven price estimation of used cars. These solutions enabled the team to make real-time, data-driven decisions.

As time passed, our client’s teams faced several challenges related to their ML model performance, including:

  • Degradation of model performance over time due to changing data patterns and market conditions (concept drift and data drift).
  • Difficulty in orchestrating and determining the right time to trigger model re-training in order to maintain optimal performance.
  • Ensuring data integrity for new incoming data, preventing the introduction of noise and biases into the model.
  • Manual monitoring of the impact of data drift on model performance.

Our Approach to Model Monitoring using Explainable AI

To address these challenges, we took a comprehensive and future-proof approach comprising the following steps:

1. Implementation of automated data drift detection using SHAP

We leveraged the SHAP (Shapley Additive Explanations) library to continuously evaluate and track the SHAP values for every new incoming data point.

In general, SHAP values provide insights into the contribution of individual features to a model’s prediction for a single data point. Any changes in the distribution of SHAP values indicate that the statistical pattern in the new data may have shifted over time, such that the model’s assumptions about the data are no longer accurate. This phenomenon is commonly referred to as concept drift. By monitoring the SHAP values, we can precisely detect when such concept drift occurs and take appropriate measures.
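A minimal sketch of how such SHAP-based drift detection might look. It assumes per-feature SHAP values have already been computed (e.g. with shap.TreeExplainer) for a reference window and a new batch; the feature names, significance level, and the choice of a two-sample Kolmogorov-Smirnov test are illustrative, not DAT’s actual configuration:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_shap_drift(reference_shap, new_shap, feature_names, alpha=0.01):
    """Flag features whose SHAP value distribution has shifted.

    reference_shap, new_shap: 2-D arrays of shape (n_samples, n_features)
    holding per-feature SHAP values. Returns the names of features whose
    distributions differ significantly under a two-sample KS test.
    """
    drifted = []
    for j, name in enumerate(feature_names):
        statistic, p_value = ks_2samp(reference_shap[:, j], new_shap[:, j])
        if p_value < alpha:
            drifted.append(name)
    return drifted

# Illustration with synthetic SHAP values: one feature's contribution shifts.
rng = np.random.default_rng(42)
reference = rng.normal(0.0, 1.0, size=(500, 2))
new = np.column_stack([
    rng.normal(0.0, 1.0, 500),   # "mileage": contribution pattern stable
    rng.normal(2.0, 1.0, 500),   # "age": contribution pattern shifted
])
print(detect_shap_drift(reference, new, ["mileage", "age"]))
```

Running this on each new batch against a fixed reference window turns drift detection into a cheap, fully automated check; the significance level `alpha` trades sensitivity against false alarms.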

2. Continuous visualization in a model monitoring dashboard

We developed a dynamic dashboard to visualize model performance using both standard evaluation metrics, such as RMSE and MAE, and per-feature SHAP values. This allowed the client to easily monitor their models, identify any performance issues, and understand how data drift was affecting the model’s accuracy.
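For reference, the two evaluation metrics the dashboard surfaces can be computed in a few lines; the car prices below are made-up illustrative values:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error: penalizes large pricing errors more heavily."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error: average pricing error in the target's own unit."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Example: actual vs. predicted sale prices for a small batch of cars.
actual    = [12500, 9800, 15300, 7200]
predicted = [12000, 10100, 14800, 7600]
print(mae(actual, predicted))   # 425.0
print(rmse(actual, predicted))  # ~433.0
```

Tracking both together is useful: a widening gap between RMSE and MAE indicates that errors are concentrating in a few badly mispriced vehicles rather than spreading evenly.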


3. Automated notification for detected data drift

We set up an automated email notification system to alert the Product Owner and Data Scientists either when model performance is degrading or when concept drift is detected. This ensured that the relevant stakeholders were promptly informed, and could take appropriate actions, such as adjusting the model’s parameters or initiating retraining, depending on the severity of the drift.
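The decision logic behind such a notification system can be sketched as follows. The threshold, function names, and message wording are illustrative assumptions, not the client’s actual values; in production the returned reasons would feed an email via smtplib or a transactional email service rather than being printed:

```python
def should_alert(current_rmse, baseline_rmse, drifted_features,
                 degradation_threshold=0.10):
    """Collect reasons to notify stakeholders.

    Returns a non-empty list when RMSE has degraded by more than the
    threshold relative to the baseline, or when any feature shows
    concept drift; an empty list means no alert is sent.
    """
    reasons = []
    if current_rmse > baseline_rmse * (1 + degradation_threshold):
        reasons.append(
            f"RMSE rose from {baseline_rmse:.0f} to {current_rmse:.0f}"
        )
    if drifted_features:
        reasons.append("concept drift detected in: " + ", ".join(drifted_features))
    return reasons

alerts = should_alert(current_rmse=520, baseline_rmse=430,
                      drifted_features=["age"])
for reason in alerts:
    print(reason)
```

Separating the "should we alert?" decision from the delivery channel keeps the logic easy to test and lets the severity of each reason drive different follow-up actions.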

4. Thorough instruction on model retraining

We provided in-depth training to the Product Owner and Data Scientists on how to retrain their models when necessary. This guidance covered various aspects, including identifying the need for retraining, selecting the appropriate training data, validating the new model’s performance, and deploying the updated model in production. This enabled them to maintain optimal model performance and make better decisions on when to trigger retraining.
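One part of that guidance, validating a retrained model before deployment, can be sketched as a champion/challenger comparison. Everything below is a hedged toy example: the linear model, the 2% improvement margin, and the synthetic age/price data are illustrative stand-ins, and for brevity the candidate is fit on the same synthetic data it is scored on, whereas in practice training and validation sets must be separate:

```python
import numpy as np

def fit_linear(X, y):
    """Least-squares fit with an intercept term (stand-in for retraining)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(X1, y, rcond=None)
    return coef

def val_rmse(coef, X, y):
    X1 = np.column_stack([np.ones(len(X)), X])
    return float(np.sqrt(np.mean((X1 @ coef - y) ** 2)))

def approve_retrain(candidate_coef, champion_rmse, X_val, y_val, margin=0.02):
    """Deploy the candidate only if it beats the champion by `margin`."""
    candidate_rmse = val_rmse(candidate_coef, X_val, y_val)
    return candidate_rmse < champion_rmse * (1 - margin), candidate_rmse

# Synthetic hold-out set: price falls roughly linearly with vehicle age.
rng = np.random.default_rng(0)
age = rng.uniform(1, 10, 200)
price = 20000 - 1500 * age + rng.normal(0, 300, 200)
X_val = age.reshape(-1, 1)

candidate = fit_linear(X_val, price)  # "retrained" on fresh data
approved, cand_rmse = approve_retrain(candidate, champion_rmse=800,
                                      X_val=X_val, y_val=price)
print(approved, round(cand_rmse))
```

Gating deployment on a measurable improvement margin, rather than retraining on a fixed schedule, is what enables the cost savings described below: retraining runs only when it actually pays off.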

Benefits

By implementing this comprehensive approach to model monitoring using XAI, our client experienced several benefits, including:

  • Prevention of outdated models in production, ensuring that their models continued to provide accurate predictions as data patterns evolved.
  • Improved model performance over time, as the system was able to adapt to changing data patterns and maintain a high level of accuracy.
  • Increased trust in their models due to higher visibility of performance metrics, enabling stakeholders to make more informed decisions based on the model’s predictions.
  • Cost savings by triggering retraining only when required, avoiding unnecessary retraining efforts and reducing the overall maintenance costs.
  • Greater control over the quality of their models, allowing them to fine-tune model parameters and ensure consistent performance.

This approach enhanced ML model reliability, performance, and stakeholder trust, with XAI making the models’ decisions understandable and thereby easing adoption.

Team Involved

One Data Scientist and one XAI Expert worked closely with the client’s data science team over a 4-month period.

  • Data Scientist: Focused on designing the monitoring solution, establishing performance metrics, and ensuring the system met analytical requirements.
  • XAI Expert: Centered on implementing explainability features, creating model interpretation tools, and ensuring stakeholders could understand system decisions.

The engagement ensured the client received both a functional monitoring solution and the knowledge needed to maintain and operate it independently.
