Goals of the chair

Research on ethical artificial intelligence has primarily focused on value alignment: the challenge of ensuring that intelligent machines operate in accordance with human moral norms and values.

This approach assumes that morals are fixed: either prescriptive, as defined by experts, or descriptive, as uncovered by behavioral science.


While this perspective is valuable, it also limits exploration of AI's potential to shape moral norms. New research is needed to account for the dynamic nature of moral norms and the role intelligent machines may play in reinforcing, challenging, or transforming them. The Center for Moral AI (MoAI) aims to study the impact of intelligent machines on human moral norms and values using a multidisciplinary approach that combines moral psychology, experimental economics, and computer science.
MoAI projects begin by identifying the moral values likely to be affected by a given technology and why, drawing on insights from moral psychology. They then use experimental economics to design incentive-compatible protocols that measure the technology's impact on those values.

Finally, computer science is used to develop a simplified or prototype version of the technology for use in experiments, often before the technology itself exists. This proactive approach allows us to prepare society for upcoming technological shifts and to guide AI development away from harmful outcomes.

