Prepare your teams for trustworthy AI

Where are you with AI?

Are you hesitant to use AI in your business?

Several reasons might hold you back from adopting AI:

  • Are you concerned that your employees or clients might have difficulty accepting it?
  • Are you worried about the risks of discrimination, bias, or hallucinations?
  • Do you doubt the robustness of the algorithms? Their ability to withstand attacks? To remain stable in production?
  • Do you deal with sensitive or critical use cases and fear compliance challenges, especially under the upcoming European ‘AI Act’ regulation or its future sector-specific variations?

Do you already use ML models in production?

  • Are your employees avoiding your models, or using them only sparingly, due to a lack of confidence?
  • Are your models suspected, rightly or wrongly, of being biased?
  • Have your models become outdated, with declining performance?
  • Are you using AI for applications likely to be classified as ‘high-risk’ under the AI Act? For example: credit scoring, anomaly detection, insurance pricing, or churn prediction?

The support we offer

We are actively developing our product AI-vidence to offer it as a SaaS solution. We also aim to stay connected with our clients by offering our expertise and technologies to selected groups. This support is mutually beneficial: we bring you our expertise in explainability and the opportunity to test our technology on your use cases before its general release, and in return we gather your needs, concerns, and feedback on our product!

In practice, we typically assist companies with the following services:

  1. Training on AI and explainability (challenges, concepts, and solutions), current initiatives in trustworthy AI, and compliance requirements: existing regulations, upcoming developments of the AI Act, and the relevant authorities.
  2. Evaluation of one or more of your use cases: explainability of your current models, or ad hoc modeling (with early access to our AI-vidence software).
  3. Definition and implementation of a trustworthy-AI roadmap: an audit of the current situation, ambitions and objectives informed by external benchmarks, and implementation (governance).

The diagram above illustrates a three-phase support process:

  1. Training: Multiple training sessions, including a workshop on compliance issues, particularly focusing on the AI Act.
  2. Trustworthy AI by design: Analysis of one or more of your use cases using the AI-vidence solution. We assess the current level of explainability and the associated risks, and may propose a substitute model where necessary.
  3. Audits and roadmap: Defining a deployment plan for trustworthy AI within your organization, including audits and the development of a strategic roadmap.