The Moral AI chair explores the way humans and machines treat each other when they make decisions with a moral component. This includes contexts in which humans and machines cooperate or collude with each other, contexts in which machines make decisions with consequences for human well-being, and contexts in which machines can shed new light on the consequential decisions that humans make about other humans.
Some examples of ongoing projects include:
- Exploring how humans want self-driving cars to distribute risks between road users
- Exploring the trust that humans have in the cooperative potential of machines
- Exploring the willingness of humans to be morally assessed by machines
- Exploring the social status that humans give to machines, and the resulting power dynamics
- Using machine learning to uncover the implicit attitudes of judges
- Using machine learning to study human biases about the facial appearance of others
Programme: Acceptable AI
Theme: AI and Society
Chair holder: Jean-François Bonnefon
Contact: jfbonnefon@gmail.com