Machine intelligence makes human morals more important

Machine intelligence is here, and we’re already using it to make subjective decisions. But the complex way AI grows and improves makes it hard to understand and even harder to control. In this cautionary talk, techno-sociologist Zeynep Tufekci explains how intelligent machines can fail in ways that don’t fit human error patterns — and in ways we won’t expect or be prepared for. “We cannot outsource our responsibilities to machines,” she says. “We must hold on ever tighter to human values and human ethics.”

Zeynep Tufekci is an associate professor at the School of Information and Library Science at the University of North Carolina at Chapel Hill, with an affiliate appointment in the Department of Sociology. She is also a faculty associate at the Harvard Berkman Center for Internet and Society, and was previously a fellow at the Center for Information Technology Policy at Princeton University. Tufekci's research interests revolve around the intersection of technology and society. Her academic work focuses on social movements and civics, privacy and surveillance, and social interaction. She is also increasingly known for her work on "big data" and algorithmic decision making. Originally from Turkey, and formerly a computer programmer, Tufekci became interested in the social impacts of technology and began to focus on how digital and computational technology interact with social, political and cultural dynamics. Her work has appeared in a wide range of outlets, from peer-reviewed journals to traditional media and blogging platforms. Her forthcoming book Beautiful Teargas: The Ecstatic, Fragile Politics of Networked Protest in the 21st Century, to be published by Yale University Press, will examine the dynamics, strengths and weaknesses of 21st century social movements.
