About us

Our story

AI-vidence (pronounced ‘évidence’) was founded in April 2021 by David, formerly the AI director at PwC. Over more than ten years of leading digital projects, David recognized the need for trust and explainability in artificial intelligence models. These black-box models, derived from machine learning on data, are not inherently explainable, and their opacity makes adoption difficult. However, the upcoming European regulation on AI (the AI Act) will require companies to explain their black boxes.

During the summer of 2021, AI-vidence won the tech sprint organized by the ACPR on the topic of explainability, a recognition of our ‘regional’ approach to the subject.

In 2022, AI-vidence joined Confiance.ai, a collective of industry leaders funded by the French government through France 2030.

In 2023, two new partners joined the company: Laurent, who had worked with David ten years earlier in a consulting firm, and Pierre, who had been recruited by David at PwC a few years prior.



A graduate of École Polytechnique (X97), 47 years old, and Vice President of the X-IA Group, David has 18 years of consulting experience in innovation and digital transformation for large corporations. Passionate about the societal implications of technological advances, he has also chaired a healthcare startup aimed at fostering intergenerational connections. Focused on data topics since 2013, he is convinced of the systemic implications of massive data usage and its processing by AI. He notably contributed to responsible AI projects at PwC, collaborating with the Impact-AI and DataIA collectives and through the XAI4AML chair with Télécom Paris.

He founded AI-vidence on the observation that solely pursuing predictive performance in machine learning is insufficient for successful and accepted deployments in enterprises. Both internal model users and external stakeholders such as regulators or end clients increasingly demand explanations for decisions made by AI systems.

David serves as President of AI-vidence, focusing primarily on strategy and R&D.

Laurent MICHEL

Laurent is a graduate of Télécom ParisTech (class of 1993), holds an MBA from the Collège des Ingénieurs, and is an actuary. Innovation and digital technology have been the common thread of his career: he worked first as a consultant (at Bossard and Roland Berger, among others), then as an entrepreneur in the web industry (Géo12 and af83, among others), and finally as an investor on behalf of the French state, serving for six years as Director of the Digital Program at the SGPI within the Prime Minister’s Services (PIA, France 2030).

Laurent is a co-founder of AI-vidence and serves as Chief Executive Officer, responsible for marketing, sales, and IT.

Pierre HULOT

Pierre is a graduate of École Polytechnique (X13) and holds a Master’s degree in Data Science from Polytechnique Montréal. He began his career as a consultant at PwC, where he spent two and a half years specializing in trustworthy artificial intelligence. He then joined the startup Boxy (autonomous stores) as lead data scientist.

In late 2023, Pierre joined AI-vidence as an associate director of data science, convinced that deploying AI requires strong trust in, and mastery of, the algorithm and its operation.

Scientific council


A researcher and one of the pioneers of AI, Pierre is Scientific Director at CRIL, University of Artois. His work focuses on logic programming, formal and hybrid AI, and the injection of human knowledge into ML models and its extraction from them.


An actuary and researcher at the University of Rennes, at UQAM, and at IVADO (Quebec), Arthur is an expert in actuarial science. His work primarily concerns bias, explainability, and non-discrimination.


A professor and researcher in applied mathematics at Télécom ParisTech (LTCI laboratory), Stéphan is one of the leading French contributors to NeurIPS.


Deep learning

A graduate of École Polytechnique and president of the X-IA group, Sophie has broad experience in machine learning, particularly deep learning, where producing explanations for extremely complex neural network algorithms is a significant challenge. She has worked on models for face detection, object identification, classification, and enhancement of satellite and medical images, using technologies such as Fast-RCNN, Occlusion Sensitivity, YOLOv3, GalaxyZoo, ResNets, tf-explain, and Grad-CAM. She also teaches computer vision, in both TensorFlow and PyTorch, at training institutes such as Yotta Academy.

Guillaume CHASLOT
Bias and ethics

Guillaume, an alumnus of École Centrale, holds a Ph.D. in Computer Science and an MSc in Artificial Intelligence, and is a TEDx speaker. An expert in artificial intelligence bias, he founded AlgoTransparency.org to highlight biases in social media algorithms and serves as an advisor at the Center for Humane Technology (www.humanetech.com). He completed his doctoral thesis in artificial intelligence at Maastricht University in the Netherlands, has previously worked for Microsoft and Google (YouTube), and served as a Mozilla Fellow.

Human and cognitive sciences

Astrid Bertrand, a graduate of Centrale Lyon with a Master of Science from HEC, is a doctoral student at Télécom Paris focusing on the explainability of AI, pursuing her thesis in behavioral economics at the Institut Polytechnique de Paris under the supervision of Winston Maxwell (Télécom Paris) and David Bounie (Télécom Paris). Her research examines the explainability of artificial intelligence in financial applications (AML, robo-advisors), with a focus on human-AI interactions and cognitive biases in AI-based decision-making. Astrid has taken part in events such as the ACPR Techsprint, the Cyber Mondays at the Léonard de Vinci Institute, the AI Mondays at Télécom Paris, and an ACPR event on the psychological foundations of effective explanation. As part of her thesis, she collaborates with the ACPR’s Fintech-Innovation department.

Natural Language Processing

Thomas holds a Ph.D. in Artificial Intelligence from Sorbonne University (CNRS, LIP6). He is a researcher in artificial intelligence and a partner at the startup reciTAL, which specializes in Natural Language Processing, and also a professor at ESILV and Polytech Sorbonne. His research focuses mainly on generative models, e.g. GPT-3, with a particular interest in multilingualism. In this capacity, Thomas is one of the chairs of the BigScience project (bigscience.huggingface.co), a large-scale collaboration involving 250 institutions and 600 researchers that aims to train the largest multilingual language model on the French supercomputer Jean Zay. He publishes his results annually at top international conferences (NeurIPS, ACL, EMNLP).

Ethics and Insurance

A graduate of ESSEC, Xavier Vamparys holds a Master’s in Law (University Paris 1 Panthéon-Sorbonne), an Executive MBA (CHEA, University Paris Dauphine), a Juris Doctor (Columbia Law School), and a Data Science Starter Program certificate (École Polytechnique Executive Education). He served as Head of Artificial Intelligence Ethics at CNP Assurances until December 2021. He began his career in 1999 as a lawyer, admitted to the Paris and New York bars, at the law firm Shearman & Sterling. In 2006 he became Legal Counsel at Oddo Corporate Finance, and in 2007 he joined BNP Paribas as Senior Legal Counsel. In 2011 he joined CNP Assurances, where he held various positions including International Legal Counsel, Corporate Legal Director, and Artificial Intelligence Mission Manager. He also engaged in knowledge sharing with the startup DreamQuark, which specializes in AI for the financial sector. He authored the book “Blockchain in Finance – Legal Framework and Practical Applications,” published in October 2018. Xavier is a visiting researcher in the “Operational AI Ethics” laboratory at Télécom Paris and has published around forty articles in legal and financial journals.


A graduate of ESCP Business School, Thomas is an expert in training and change management support for business teams. He founded Catalix, which offers training courses and workshops on AI and data for managers, project managers, and product managers, familiarizing them with the specifics of applications incorporating machine learning. With over 20 years of experience in digital consulting firms, he is also a partner in a project for a free digital school open to all (coding, data, cybersecurity…).

Raphaël DOAN

A former student of the École Normale Supérieure de Paris and the École Nationale d’Administration, and an agrégé in classical literature, Raphaël Doan is a senior civil servant, historian, and essayist. He has notably published “Quand Rome inventait le populisme” (Cerf, 2019), “Le Rêve de l’assimilation” (Passés Composés, 2021), and most recently “Si Rome n’avait pas chuté,” co-written with generative artificial intelligence (Passés Composés, 2023). He is also an elected official in Le Pecq (78) and a co-founder of the Vestigia project, which uses new technologies to promote historical research and popularization.