AI Toolkit

November 19, 2025

AI Center Updates

Building an AI-Ready IMSA: Community Dialogues, Alumni Expertise, Expanded Tools, and Policy Foundations

AI Bytes Wraps Up Semester Programming


The student-led AI Bytes series, currently featuring alumni speakers who work with AI across a variety of fields, has continued to draw students on Tuesdays during the Support and Engagement period. Recent presenters have included Vikram Anjur ’15 from Nvidia, Nathan Butters ’06 from Salesforce, data scientist Jessica David ’06, and Jake Gerstein ’95 from CIBC. The last program of the semester, on December 2, will feature Susan Massey ’01 on AI and the law.


AI Reading Series Pivots to a Chat with Students about AI on 12/3


The last AI “Reading Series” event of the semester will take place on December 3 from noon to 1 p.m. in the IN2 Learning Lab. Rather than focusing on a reading, the event will seek to create a forthright conversation among faculty, staff, administrators, and students about how AI is being used by students and teachers at IMSA three years after the introduction of ChatGPT. Details on the event will come in a separate email soon.


Paid ChatGPT 5 Licenses Available Free to Faculty Thanks to the Stephanie Pace Marshall Endowment from the IMSA Fund


Faculty members who wish to experiment with paid versions of an AI system, including those who are part of the ChatGPT Team pilot, should contact aicenter@imsa.edu to request a new or renewed ChatGPT 5 license supported by the Stephanie Pace Marshall Endowment Fund within the IMSA Fund. On a separate note, we’ll be reaching out to participants for feedback and to hear how faculty have been using the tool.


Feedback and Input Sought for AI Classroom Guidelines and Policy


In the coming months, we’ll be working with faculty, staff, and administrators to solicit feedback on AI guidelines and policies for the institution, starting with an All-Academic Meeting introductory conversation today.


Join the Conversation

From Skepticism to Self-Awareness: The New Crossroads of AI and Public Trust


A new poll by the Pew Research Center finds that Americans are getting frustrated with artificial intelligence in their daily lives. The poll reflects a growing distrust and disillusionment with AI that goes beyond those most committed to an explicitly anti-AI stance. Average Americans are concerned about how AI tools could stifle human creativity, amplify bias, introduce inaccurate or misleading information, images, and videos, create negative environmental externalities, violate intellectual property rights, create “AI psychosis,” and undermine livelihoods. Per the Pew poll, “generally pessimistic” opinions about AI are significantly more widespread than they were before the introduction of ChatGPT three years ago.


Even as they continue to push forward with new AI systems, AI labs are taking measures to increase the reliability and trustworthiness of their systems. Current approaches to developing safeguards include filtering prompts and outputs, recalibrating models on the fly, and scrubbing massive datasets. Researchers are also working to give artificial intelligence the ability to recognize when it might be wrong and have found early signs that large language models can sometimes recognize their own inner workings. Tech companies and universities are exploring how models might not only generate outputs, but also describe the reasoning behind them. In a recent paper, scientists at Anthropic reported that the company’s most advanced systems occasionally noticed when engineers altered their internal computations. The team described this as “introspective awareness.”


Scientists caution that the term has nothing to do with consciousness or emotion in a way that might induce sci-fi fears of machine sentience. It simply describes a system’s ability to detect a statistical irregularity within its own patterns: the model can sense a discrepancy, but there is no feeling attached to it. In their book Introduction to Foundation Models, IBM scientists Pin-Yu Chen and Sijia Liu argue that the next frontier in AI is not greater power or speed, but greater self-awareness, with systems that can sense uncertainty before their errors cause harm. As they put it, “Technology doesn’t have to be perfect, but it should be honest about what it can and can’t do.” Beyond these technological efforts, education will play a significant role in how AI affects the future: models that better recognize their own limits, paired with users who understand those limits, offer the clearest path toward meaningful trust.


Is AI a Threat or a Tool? Higher Education's Deepening Debate


The Chronicle of Higher Education solicited contributions from 15 scholars to an opinion forum on “How AI is Changing Higher Education” [pdf], and even a glance at the headlines of the pieces shows an array of seemingly incompatible opinions:



  • Anti-AI positions such as “Chatbots are Antithetical to Learning” (Emily Bender) and “AI is Undermining our Trust in Reality” (Patricia Williams)
  • Dual approaches, such as “Students Need to Think With AI—and Without It” (Yascha Mounk) and “Thinking With—and beyond—AI” (Joseph Aoun)
  • Full-steam ahead approaches, including “We Must Prepare Students for an AI World” (Avinash Collis) and “Your Job is to Stay Ahead of AI” (Hollis Robbins)
  • Calls to rethink education in the wake of AI, including “AI Can Free Us Up for What’s Truly Important” (Arvind Narayanan), “We Need to Rethink Grading” (Jason Gulya), and “The Problem isn’t AI—It’s Complacency” (Ian Bogost).


A similar spectrum of opinions circulates among faculty, staff, and students within the IMSA community, and it’s clear that both higher education and IMSA will be grappling with the ramifications of generative AI for years to come.


Thanks for reading! More details on the student-faculty conversation soon, and the last fall semester AI Toolkit in two weeks.

AI Center

Illinois Mathematics and Science Academy

1500 Sullivan Road, Aurora, IL 60506-1000

630.907.5000 |  imsa.edu

Facebook  Instagram  Linkedin  Youtube

Notice of Nondiscrimination: IMSA prohibits sex discrimination in any education program or activity that it operates. Individuals may report concerns or questions to the Title IX Coordinator.