Why do we urgently need to measure AI’s societal impacts? How will artificial intelligence systems change the way we live? These are difficult questions: on one hand, AI tools are producing compelling advances in complex tasks, with dramatic improvements in energy consumption, audio processing, and leukemia detection.
There is extraordinary potential to do much more in the future. On the other hand, AI systems are already making problematic judgements that are producing significant social, cultural, and economic impacts in people’s everyday lives.
AI and decision-support systems are embedded in a wide array of social institutions, influencing everything from who is released from jail to the news we see. For example, Facebook’s automated content moderation system recently censored the Pulitzer Prize-winning image of a nine-year-old girl fleeing napalm bombs during the Vietnam War. The girl is naked; to an image-processing algorithm, this might appear as a simple violation of the policy against child nudity. But to human eyes, Nick Ut’s photograph, “The Terror of War,” means much more: it is an iconic portrait of the indiscriminate horror of conflict, and it has an assured place in the history of photography and international politics. The removal of the image prompted an international outcry before Facebook backed down and restored it. “What they do by removing such images, no matter what good intentions, is to redact our shared history,” said the Prime Minister of Norway, Erna Solberg.
By Kate Crawford and Meredith Whittaker