AI Toolkit

September 24, 2025

AI Center Updates

AI Bytes, Reading Groups, and Symposium: A New Year of Exploration

The AI Center’s new Student Director is Deen Kareem ’26, and he’s working with several student interns to plan the year’s programming. The AI Bytes series will kick off during Support & Engagement on Tuesday, October 21, featuring alumnus Vikram Anjur ’15, an AI developer who has worked for Apple and currently collaborates with Nvidia. Several students have attended past sessions, and we encourage interested faculty and staff to join as well.


The Inspire and Ignite Symposium, scheduled for this October, will feature compelling sessions on artificial intelligence, coordinated jointly by the Principal’s Office, the AI Center, and Outreach. Among other topics, sessions will explore faculty uses of AI in the classroom, share what’s new and useful in AI tools, open a window into the inner workings of generative AI models, examine ethical use and guidelines in the classroom, and explain the intersections of AI and neuroscience.


The first AI reading group session of the year will take place today, and we’ll share information about upcoming sessions soon.

Join the Conversation

AI at 75: Existential Threat or Everyday Tool?


As we mark the 75th anniversary of the “Turing Test,” Alan Turing’s “Imitation Game” that inaugurated thinking about machines’ ability to exhibit intelligent behavior similar to that of a human, we continue to grapple with the implications of the technologies that have emerged in its wake. The rapid growth of text and image generation over the last three years has sparked a new round of questions about how AI will impact society. Will the AI systems we’ve launched prove to be friendly helpmates or the heartless despots seen in dystopian films and fictions? A range of perspectives on the risks associated with AI is currently in play.


Some echo dystopian narratives and emphasize existential risks. The nonprofit research group AI Futures Project, for example, has forecast an AI apocalypse in 2027. The impact of superhuman AI over the next decade, they argue, will exceed that of the Industrial Revolution. At its extreme, this perspective predicts the annihilation of the human race (“AI x-risk”) unless we make different choices. Industry leaders and academic researchers, including Geoffrey Hinton, Yoshua Bengio, and Sam Altman, share similar concerns about AI control and AI alignment with human goals, preferences, and ethical principles.


Many ordinary Americans, on the other hand, fear more mundane forms of dehumanization in daily life. In a Pew Research Center poll this month, 53% of just over 5,000 U.S. adults surveyed said they believe AI will “worsen people’s ability to think creatively.” 50% say AI will erode our ability to form meaningful relationships, while only 5% believe the opposite. Even as AI tools gain in popularity and the industry celebrates the potential of automation, the poll highlights growing distrust and disillusionment.


By proposing “AI as Normal Technology,” Princeton computer scientists Arvind Narayanan and Sayash Kapoor make the case that while AI is transformational, it is far from unprecedented. Neither an existential risk nor an enduring threat to humanity, they argue, AI is likely to follow the same patterns as other technological revolutions, such as electrification, the automobile, and the internet. The tempo of technological change is set not by the pace of innovation but by the pace of adoption, which is governed by economic, social, and infrastructure factors and allows people time to adapt.


While some risks may be more plausible than others, most observers agree that the continued development of AI carries significant risks that need to be managed. Accordingly, many leaders encourage developers to build human-centered tools that empower individuals rather than concentrate power in a few large companies. The Brookings Institution, for example, has championed a distributed, pluralistic AI ecosystem in which human agency and privacy are paramount.


In the Classroom

AI Co-Intelligence in Action: 4 Rules to Try

In Co-Intelligence: Living and Working with AI, the featured read of last year’s AI reading group, Ethan Mollick shares four approaches to generative AI that may be useful to those working to explore what all the fuss is about. Mollick’s title signals his approach: he imagines AI as a supportive tool that people can interact with.


Mollick’s Four Rules for Co-Intelligence are:


  • “Always invite AI to the table.” While many believe AI is being overused in too many tools, Mollick urges widespread experimentation. By trying AI across various areas of our work, he argues, we can better determine where it is useful and where it is not.


  • “Be the human in the loop.” Assume that AI systems rely on the informed and ethical guidance of a human perspective. Being the human in the loop means not trusting AI systems implicitly, shaping their outputs to human ends, and being vigilant for misinformation and potential harms.


  • “Treat AI like a person (but tell it what kind of person it is).” While there are risks to anthropomorphizing AI, Mollick argues that contemporary chat-based systems are most useful when we interact with them like imaginary people or characters. Even while keeping in mind that the AI is not, in fact, a person, the user has the power to shape the kind of person it acts like: a tutor, an instructional designer, a critical editor, or a mentor. (For a sense of how this looks in practice, see the sketch after this list.)


  • “Assume this is the worst AI you will ever use.” Mollick argues that it’s essential to experiment with AI systems now, as continued advancement will only make them more pervasive and useful in the future.
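

To make the third rule concrete, here is one way “telling it what kind of person it is” might look in code: a minimal sketch assuming the openai Python package, where the model name, editor persona, and draft text are illustrative placeholders rather than anything prescribed by Mollick. The same move works in any chat interface by opening the conversation with a persona-setting instruction.

    from openai import OpenAI

    client = OpenAI()  # assumes an OPENAI_API_KEY in the environment

    # "Tell it what kind of person it is": the system message assigns
    # a persona before any work is requested.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": "You are a critical editor. Flag vague claims "
                           "and awkward phrasing, explaining each flag "
                           "briefly; do not rewrite the draft for the author.",
            },
            {"role": "user", "content": "Please review this draft: ..."},
        ],
    )
    print(response.choices[0].message.content)

Swapping in a different system message, say, a patient calculus tutor, turns the same model into a very different collaborator.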


In her newsletter, “Step Up Together,” Alyssa Fu Ward shares a compelling sketchnote that maps out the four principles, as pictured below.

Four Rules for Co-Intelligence

The AI Center has copies of Co-Intelligence available for interested readers. Reach out to aicenter@imsa.edu to borrow this great read.

This edition of AI Toolkit featured contributions by Dr. Eric Rettberg and Dr. Ashwin Mohan. Thank you for reading!

AI Center

Illinois Mathematics and Science Academy

1500 Sullivan Road, Aurora, IL 60506-1000

630.907.5000 |  imsa.edu

Facebook  Instagram  LinkedIn  YouTube

Notice of Nondiscrimination: IMSA prohibits sex discrimination in any education program or activity that it operates. Individuals may report concerns or questions to the Title IX Coordinator.