As we mark the 75th anniversary of the “Turing Test,” Alan Turing’s “Imitation Game” that inaugurated thinking about whether machines can exhibit intelligent behavior indistinguishable from a human’s, we continue to grapple with the implications of the technologies that have emerged in its wake. The rapid growth of text and image generation over the last three years has sparked a new round of questions about how AI will impact society. Will the AI systems we’ve launched prove to be friendly helpmates or the heartless despots of dystopian films and fiction? A range of perspectives on the risks associated with AI is currently in play.
Some echo dystopian narratives and emphasize existential risks. The nonprofit research group AI Futures Project, for example, has forecast an AI apocalypse in 2027. The impact of superhuman AI over the next decade, they argue, will exceed that of the Industrial Revolution. At its extreme, this perspective predicts the annihilation of the human race, “AI x-risk,” unless we make different choices. Industry and academic researchers, including Geoffrey Hinton, Yoshua Bengio, and Sam Altman, share related concerns about AI control and AI alignment with human goals, preferences, and ethical principles.
Many ordinary Americans, on the other hand, fear more mundane forms of dehumanization in daily life. In a Pew Research Center poll of just over 5,000 U.S. adults this month, 53% said they believe AI will “worsen people’s ability to think creatively.” Half said AI will erode our ability to form meaningful relationships, while only 5% believe the opposite. Even as AI tools gain in popularity and the industry celebrates the potential of automation, the poll highlights growing distrust and disillusionment.
By proposing “AI as Normal Technology,” Princeton computer scientists Arvind Narayanan and Sayash Kapoor make the case that while AI is transformational, it is far from unprecedented. Neither an existential risk nor an enduring threat to humanity, they argue, AI is likely to follow the same patterns as other technological revolutions, such as electrification, the automobile, and the internet. The tempo of technological change is set not by the pace of innovation but by the pace of adoption, which is governed by economic, social, and infrastructure factors and gives people time to adapt.
While some risks may be more plausible than others, most observers agree that the continued development of AI carries significant risks that need to be managed. Many leaders therefore encourage developers to build human-centered tools that empower individuals rather than concentrate power in a few large companies. The Brookings Institution, for example, has championed a distributed, pluralistic AI ecosystem in which human agency and privacy are paramount.