From Skepticism to Self-Awareness: The New Crossroads of AI and Public Trust
A new poll by the Pew Research Center finds that Americans are growing frustrated with artificial intelligence in their daily lives. The poll reflects a distrust and disillusionment with AI that extends well beyond those most committed to an explicitly anti-AI stance. Average Americans are concerned that AI tools could stifle human creativity; amplify bias; introduce inaccurate or misleading information, images, and videos; create negative environmental externalities; violate intellectual property rights; induce “AI psychosis”; and undermine livelihoods. Per the Pew poll, “generally pessimistic” opinions about AI are significantly more widespread than they were before the introduction of ChatGPT three years ago.
Even as they push forward with new AI systems, AI labs are taking measures to increase the reliability and trustworthiness of those systems. Current approaches to developing safeguards include filtering prompts and outputs, recalibrating models on the fly, and scrubbing massive datasets. Researchers are also working to give artificial intelligence the ability to recognize when it might be wrong, and they have found early signs that large language models can sometimes notice aspects of their own inner workings. Tech companies and universities are exploring how models might not only generate outputs but also describe the reasoning behind them. In a recent paper, scientists at Anthropic reported that the company’s most advanced systems occasionally noticed when engineers altered their internal computations. The team described this as “introspective awareness.”
Scientists caution that the term does not imply consciousness or emotion, and it should not stoke sci-fi fears of machine sentience. It simply describes a system’s ability to detect a statistical irregularity within its own patterns: the model can sense a discrepancy, but there is no feeling attached to it. In their book Introduction to Foundation Models, IBM scientists Pin-Yu Chen and Sijia Liu argue that the next frontier in AI is not greater power or speed, but greater self-awareness, with systems that can sense uncertainty before their errors cause harm. As they put it, “Technology doesn’t have to be perfect, but it should be honest about what it can and can’t do.” Beyond these technological efforts, education will play a significant role in how AI affects the future: models that better recognize their own limits, paired with users who understand those limits, offer the clearest path toward meaningful trust.
Is AI a Threat or a Tool? Higher Education's Deepening Debate
The Chronicle of Higher Education solicited contributions from 15 scholars to an opinion forum on “How AI is Changing Higher Education” [pdf], and even a glance at the headlines of the pieces shows an array of seemingly incompatible opinions:
- Anti-AI positions such as “Chatbots are Antithetical to Learning” (Emily Bender) and “AI is Undermining our Trust in Reality” (Patricia Williams)
- Dual approaches, such as “Students Need to Think With AI—and Without It” (Yascha Mounk) and “Thinking With—and beyond—AI” (Joseph Aoun)
- Full-steam ahead approaches, including “We Must Prepare Students for an AI World” (Avinash Collis) and “Your Job is to Stay Ahead of AI” (Hollis Robbins)
- Calls to rethink education in the wake of AI, including “AI Can Free Us Up for What’s Truly Important” (Arvind Narayanan), “We Need to Rethink Grading” (Jason Gulya), and “The Problem isn’t AI—It’s Complacency” (Ian Bogost).
A similar spectrum of opinions circulates among faculty, staff, and students within the IMSA community, and it’s clear that both higher education and IMSA will be grappling with the ramifications of generative AI for years to come.