Artificial intelligence (AI) is driving profound changes that can be seen in many areas—economics, healthcare, finance, the military, and more. Alongside the vast opportunities it presents come numerous risks that we still struggle to predict or even perceive. Because AI now permeates every corner of human activity, we must build a framework of trust that ensures its outputs respect fundamental rights and freedoms. In 2023, an open letter signed by global experts and major industry leaders called for AI research and development to focus more on “making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” The authors also stressed the urgent need to work with policymakers to speed the creation of a solid governance framework for AI—one that includes new specialized regulatory authorities as well as audit and certification measures. The European AI Act marks a first step in this direction.

Regulating, auditing, and improving AI systems poses an unprecedented scientific challenge. Rising to it demands collaboration across many disciplines—computer science, mathematics, law, economics, and more. Our team is part of this research movement: we bring together expertise in machine learning, law, mathematics, optimization, computer science, and economics, creating a fertile environment for addressing today's grand challenges of trust and regulation. Our goal is to help lay the groundwork for a sustainable, ethical, and responsible future for AI technology, transforming the way these systems are managed and governed.