Dear colleagues,
With this newsletter, the McGraw Center hopes to provide an informed perspective and practical information on teaching and learning in the era of generative AI. We will offer a variety of viewpoints and approaches, with the aim of helping you decide when (if ever) and how to incorporate AI in a particular course. In this first edition, we will bring up some of the topics that faculty most often raise with us, including how to set a generative AI policy and how to talk to students about scholarly integrity. In future editions, we will also highlight how faculty at Princeton use AI in their courses. If you have suggestions for topics or case studies, please email the McGraw Center.
We encourage faculty to set a clear and unambiguous policy for the use of generative AI in their courses. The policy should be stated on the syllabus and reinforced in conversations with students. In formulating a policy, you might find the following suggestions helpful:
Review section 2.4, Academic Regulations, in Rights, Rules, Responsibilities (RRR), which has been updated with information about generative AI.
Align your generative AI policy with the goals of your course and discipline. Ask, for instance, if AI can help your students achieve any of your curricular or learning objectives. Is using AI a skill that students in your field will need to know? The McGraw Center’s website on generative AI has sample syllabus language and other resources you might find helpful.
Ask yourself whether generative AI will aid or hamper your students’ learning. Will allowing its use improve your students’ cognitive or emotional engagement, or might it diminish independent problem-solving skills and critical thinking? In this post, Dr. Philippa Hardman, a researcher on AI and learning, summarizes five recent studies on how generative AI might affect learning.
If you would like to discuss setting an AI policy or want help thinking through your assignments in light of AI, please contact Jessica Del Vecchio, Senior Associate Director for Teaching Initiatives and Programs for Faculty at the McGraw Center.
Scholarly Integrity & Generative AI
We recognize that many instructors are concerned about students’ use of generative AI and its implications for scholarly integrity. In thinking about AI and scholarly integrity, you might find Princeton's Scholarly Integrity website helpful. It contains information for both students and instructors, including a designated section on how students should disclose the use of generative AI if its use is permitted in your class.

AI Prevention or Detection Tools

The McGraw Center does not support any technical tools that claim to prevent or detect the use of generative AI. There is little evidence that such tools are reliable or effective, and they may be biased. In addition to stating a clear policy on your syllabus, we recommend that you reinforce scholarly integrity through conversations with your students.

Reinforcing Scholarly Integrity Through Conversation
Once you have set your generative AI policy and articulated it clearly on your syllabus, we encourage you to discuss it with your students. Explain your reasoning for creating the policy and offer insight into how you see it supporting your course’s learning goals. If you disallow AI use, explain how students’ engagement with AI tools interferes with the learning and skill-building you hope they will do in your course. Acknowledge the power of the tools for certain tasks and in specific contexts, but remind them of their responsibilities, as Princeton students, to honest intellectual engagement in their coursework. You might point out the ethical risks of using these tools, or ask students to weigh them against their potential benefits, especially if this type of critical thinking relates to the work they will do in your course.
If you are allowing or assigning the use of generative AI, explain to students what you see as the tools’ value. Be clear and specific about how you would like students to disclose their use of these tools, as RRR requires. For example, do you want students to turn in their prompts or the full output from the AI, or is a simple statement about how they used the tools enough? (Note that students must disclose the use of AI for rewording, paraphrasing, or editing.) Offer concrete examples to illustrate what you see as acceptable and unacceptable uses in your course. Be transparent about your own use of these tools, and model responsible decision-making and disclosure for your students. Note, too, that some students may be reluctant to use tools that are not supported by the University.
Keep the conversation going throughout the semester, reminding students of your policy as they work on different course assignments. Remember that students will be navigating different policies in their courses (and sometimes different policies on various assignments within the same course), so being as clear as possible about your expectations can help ensure students meet them.
In the article How Are Students Really Using AI? in The Chronicle of Higher Education, Derek O’Connell surveys the data on students’ use of and attitudes toward generative AI. While the data is disparate, some trends seem clear. First, and not surprisingly, the use of AI is increasing among all students: from year to year, more students report experience with AI. Second, usage increases with age and class year, with older students using AI more often than younger students. Third, usage varies by field of study: humanities students are the least likely to use AI, while STEM and social science students are more frequent users. The data also reveals that students hold a wide range of attitudes toward the ethics and usefulness of generative AI.
Like their peers at other colleges, students at Princeton report extensive use of generative AI. According to this year’s Senior Survey conducted by The Daily Princetonian, more than 90% of graduating seniors have used ChatGPT or another Large Language Model (LLM) at some point. For class work, 79% of seniors reported using an LLM on an assignment when it was allowed, and 28% reported using one on an assignment when it was not allowed. Note that not all students are comfortable using these platforms, even when allowed.
Students at Princeton also have quite varied opinions about the ethics and value of generative AI for academic purposes. To better understand student attitudes on this topic, you might find the following contrasting op-ed pieces in The Daily Princetonian helpful: Princeton, stop using ChatGPT and In defense of ChatGPT.
Given students’ diversity of experiences with and attitudes towards AI, it is important to cultivate a shared understanding of the AI policies in your course. Being precise about your policies and transparent about your rationale for the use (or not) of generative AI can help create buy-in.
Generative AI Tools for Teaching
The vendors of Canvas and a number of other educational tools that the McGraw Center manages have recently announced AI updates to their software. In general, we take a very cautious approach to such updates. We will not enable any generative AI tools in Canvas or its integrated applications without careful vetting, and any tool we do enable will be an optional feature for faculty (meaning something that you can opt into).
To learn the status of generative AI tools in Princeton’s Canvas ecosystem, please consult the AI Features page in Field Guide to Canvas at Princeton. For a broader overview of AI tools at Princeton, please consult OIT’s website Generative AI at Princeton.
Several courses on campus have experimented with course-specific chatbots, which can function much like a Q&A discussion forum. There are pros and cons to including this type of AI.
On the one hand, a bot might provide a safe space for students to ask questions that they might otherwise feel too self-conscious to ask. If you suspect that your students are consulting ChatGPT or other LLMs for answers to content questions, they might get more accurate responses from a tool that has been trained on course-specific content. Providing all students with access to the same AI tool can also promote equity.
On the other hand, for many courses a chatbot is not a suitable tool. That might be because the content does not lend itself to a Q&A style format or because instructors might want students to come to office hours or discuss questions in class.
Ed Discussion, which all Princeton instructors have access to through Canvas, has an add-on called Bots++, which you can enable with our help. You can connect Bots++ to your LLM of choice and train it on course-specific content. It is also possible to refine the core prompt to improve the results. We recommend that you use Bots++ in moderation mode at first, meaning that you and the rest of the teaching team review answers before they are released to students. Think of Bots++ as an assistant to the course team for answering student questions.
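For instructors curious what “moderation mode” looks like in practice, the workflow described above can be sketched in a few lines. This is a hypothetical illustration only, not the actual Bots++ API: the names (ModeratedBot, draft_answer, approve) are invented, and the call to the connected LLM is stubbed out.

```python
# A minimal sketch of a moderated Q&A workflow: the bot drafts an answer
# to each student question, but nothing is released to students until a
# member of the teaching team reviews and approves it.
from dataclasses import dataclass, field

@dataclass
class Draft:
    question: str
    answer: str
    approved: bool = False

@dataclass
class ModeratedBot:
    pending: list = field(default_factory=list)   # drafts awaiting review
    released: list = field(default_factory=list)  # answers visible to students

    def draft_answer(self, question: str) -> Draft:
        # In a real system, this step would query the connected LLM with
        # course-specific context; here it is stubbed out.
        draft = Draft(question, f"[draft answer to: {question}]")
        self.pending.append(draft)
        return draft

    def approve(self, draft: Draft) -> None:
        # Only approved drafts become visible to students.
        draft.approved = True
        self.pending.remove(draft)
        self.released.append(draft)

bot = ModeratedBot()
d = bot.draft_answer("When is Problem Set 3 due?")
assert d not in bot.released  # held for teaching-team review
bot.approve(d)
assert d in bot.released and d.approved
```

The design choice this sketch highlights is the holding queue: the teaching team stays in the loop on every answer, which is why moderation mode is a sensible default when first enabling a course chatbot.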
Please consult our page on Ed Discussion and Bots++ in the Field Guide to Canvas. If you’d like it enabled in your course, contact the McGraw Center’s Canvas team.
Faculty Discussion: GAI and Our Classrooms
Claire Baytaş and Dylan Ruediger of Ithaka S+R present the findings of their research project on generative AI’s impact on college teaching and learning. Featuring Sami Kahn, Executive Director of the Center on Science and Technology (CST) and part of the project’s Princeton research team.
Thursday, September 18
4:30-5:30 P.M.
Online
Faculty Discussion: GAI and Our Classrooms
Join us for a conversation with Jason Puchalla, Senior Professional Specialist & Lecturer in Physics, and Ben Johnston, Senior Educational Technologist at the McGraw Center, about how customized AI-driven chatbots can support student learning.
Thursday, October 23
4:30-5:30 P.M.
Online
In Conversation with AI: Generative AI in Online Discussion Boards
Bots++ is an AI-powered discussion board chatbot, integrated with the Ed Discussion platform, currently available on request in Canvas. This session, led by Alex Hollinghead and Ben Johnston from the McGraw Center, will provide an opportunity to explore this new tool, walk through the process of setting up Bots++, and discuss considerations for bringing generative AI into class discussion boards.
Monday, October 30
2:30-3:30 P.M.
Online
Faculty Discussion: GAI and Our Classrooms
How might we engage students in the ethical use of AI tools in the classroom? Reflect on this question and others as we explore Princeton-specific examples of assignments that ask students to use AI and reflect on their interactions with it.
Tuesday, November 11
4:30-5:30 P.M.
Online
© The Trustees of Princeton University