Our explainability solutions

Our journey towards trustworthy AI

The mission of AI-vidence is to make AI more explainable. We are convinced that a component of logical AI (rule-based reasoning) must be introduced for a model to be robust and to provide operational guarantees. See our page on trustworthy AI.

Following Daniel Kahneman's Systems 1 and 2, which describe the two modes of operation of the human brain, we believe that artificial intelligence must rely on the same two modes: System 1, which corresponds roughly to machine learning, and System 2, which corresponds to the reasoning of logical AI.

Echoing Kahneman's Systems 1 and 2, AI-vidence's ambition is to help temper AI's headlong rush towards pure connectionism by grounding trustworthy AI in logic.

Our approach to explainability

Whether dealing with simple AI models (such as regressions or decision trees) or complex ones (like deep neural networks), our approach to explainability incorporates the same key ingredients:

  • First, we identify the right scale of observation, the appropriate level of zoom. This lies at the heart of the regional explanation principle: we believe the segmentation itself carries meaning and helps explain the phenomenon.
  • Then, within each region, we reduce complexity: by substituting simpler models for the original model, by reducing the number of variables once causal links have been highlighted, and by using a logical prediction model (based on rules and causes) such as Causalgo, one of our internally developed causality libraries. A minimal sketch of this region-then-surrogate idea follows this list.
  • Finally, it is essential to involve all stakeholders in explaining the phenomenon. While some AI-vidence modules for data scientists work in a Jupyter notebook, other tools shared with business and compliance teams allow real-time collaboration and promote collective intelligence. For certain use cases in specific industries, AI-vidence offers prototype interfaces for dynamically testing hybrid models before modification and deployment.
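
The sketch below illustrates these first two ingredients using generic scikit-learn components as stand-ins: the K-means segmentation, the choice of four regions, and the shallow decision-tree surrogates are all illustrative assumptions and do not reflect the internals of AntakIA or Causalgo.

```python
# Illustrative sketch of the "regional explanation" idea: segment the input
# space into regions, then replace the original model inside each region with
# a simpler, rule-producing surrogate. Everything here (K-means, four regions,
# depth-2 trees) is an assumption for illustration, not the implementation
# used in AntakIA or Causalgo.
from sklearn.cluster import KMeans
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.tree import DecisionTreeRegressor, export_text

# A complex "black-box" model standing in for the original model.
X, y = make_regression(n_samples=2000, n_features=4, noise=10.0, random_state=0)
black_box = GradientBoostingRegressor(random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# Ingredient 1: pick a scale of observation by segmenting the data into regions.
n_regions = 4
segmenter = KMeans(n_clusters=n_regions, n_init=10, random_state=0).fit(X)

# Ingredient 2: within each region, substitute a simple rule-based surrogate
# (a shallow decision tree) trained to imitate the black box's predictions.
features = [f"x{i}" for i in range(X.shape[1])]
for region in range(n_regions):
    mask = segmenter.labels_ == region
    surrogate = DecisionTreeRegressor(max_depth=2, random_state=0)
    surrogate.fit(X[mask], bb_pred[mask])
    # Fidelity: how well the simple surrogate imitates the black box here.
    fidelity = surrogate.score(X[mask], bb_pred[mask])
    print(f"--- region {region} (fidelity R^2 = {fidelity:.2f}) ---")
    # The tree's branches read as explicit if/then rules for this region.
    print(export_text(surrogate, feature_names=features))
```

Note that the per-region fidelity score is itself informative: if a simple surrogate cannot imitate the black box within a region, the segmentation is probably operating at the wrong level of zoom.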

This approach was recognized by the ACPR during a tech sprint on explainability (read our article). It also led us to join the Confiance.ai collective.

Our solutions

We are a software publisher. While we occasionally charge for explainability consulting services, we do so only to stay close to the market, working with select companies on rapidly evolving issues.

Our business model primarily revolves around selling licenses for our AI-vidence explainability software. We are considering several variations of our software:

  • AntakIA is our open-source solution. It is a simplified version of our approach but provides a comprehensive experience of regional explanation. Learn more about AntakIA.
  • AI-vidence is our complete explainability solution. AI-vidence is offered in several versions:
    • According to hosting modalities: on-premises or cloud
    • According to use cases: generic or tailored to specific sectors. The sector-specific versions include interface prototypes (UI) that let all departments validate and better understand their use cases.
  • On-premises versions include Python libraries and Docker images to deploy on your infrastructure. They offer the same features as those hosted in our cloud.
  • Licenses and pricing are annual.
  • Currently, only our open-source version AntakIA is available.
  • The "generic" on-premises AI-vidence will be launched sometime in 2024.
Overview of AntakIA