Harvard’s AI Guidance: A Lesson in Binary Thinking

Cogito, Clicko Sum

Harvard College's Office of Undergraduate Education is housed within University Hall. The OUE has issued the school's initial guidance on AI use in the classroom. By Frank S. Zhou
By Andy Z. Wang, Crimson Opinion Writer

Nearly fifty years ago, a revolution hit American classrooms: the portable calculator. A Science News article from 1975 claimed that for every nine Americans, there was one calculator in service. While the public rushed to purchase the new product, teachers had to grapple with a much more difficult question: How would these handheld devices change the mission and practice of education?

The answers were mixed. As Science News noted in 1975, the number-crunchers had the potential to “make tedious math fun, fast, and accurate,” and when used for “creative problem solving,” student motivation appeared “spontaneous.” At the same time, the piece echoed widespread worries that the “mechanization of fundamental classroom skills” might leave kids “unable to do simple math on paper.”

The question of calculators in classrooms, then, was not just one of technology, but of the fundamental methods of education. In turn, the response could not simply be a technical matter of regulating the devices (although some certainly tried). Rather, the calculator helped spur the so-called “math wars” a decade later, which interrogated the basic building blocks of a mathematical education.

These debates have raged on ever since; technology has always forced us to reevaluate education, and the recent meteoric rise of generative artificial intelligence tools like ChatGPT has proven no exception. Indeed, by issuing new guidance on AI, Harvard clearly recognizes its own influence in the discussion of AI’s proper role in the classroom.

But Harvard’s approach to date — both at the administrative and class level — has been too reactive. The right response to the advent of calculators was not blind acceptance or blanket prohibition, but a proactive conversation about how these devices would forever change math education, for good and for ill. Likewise, as generative AI enters the education landscape, students must learn the strengths and weaknesses of this new technology — not just whether they’re allowed to use it.

Unfortunately, Harvard’s guidance misses an opportunity to spark this conversation.

Issued by the Office of Undergraduate Education, the guidance does not set out a universal Faculty of Arts and Sciences-wide policy. Rather, it encourages instructors to explicitly include an AI policy within their syllabi, suggesting either a “maximally restrictive” policy that treats use of AI as academic dishonesty, a “fully encouraging” policy that permits students to use AI tools provided they properly cite and attribute them, or a “mixed” policy that lands somewhere in the middle.

While outlining these options may appear convenient, in effect the OUE’s guidance does little more than provide administrator-approved wording for “AI Yes,” “AI No,” and “AI Sometimes Yes, Sometimes No.” In doing so, the OUE sidesteps an opportunity for students, instructors, and administrators to work together to understand the role of generative AI in the classroom.

Open discussions around AI usage are especially crucial when we consider that the genie is already out of the bottle. A March survey revealed that one in five college students has used ChatGPT or other AI tools on their schoolwork, a figure that has surely risen in the months since. A blanket ban on AI systems seems futile, as the OUE guidance itself acknowledges: Instructors are told to try plugging their assignments into ChatGPT and then “assume that if given the opportunity, many of the students in your course are likely to do the same thing.”

Moreover, the OUE recognizes that employing AI-detection tools results in “something of an arms race.” Clever students have already found methods to circumvent AI-language detection, rendering a full ban on generative AI tools counterproductive.

Given this technology’s seemingly inevitable expansion, students should understand the rationale behind AI-related classroom policies, and the onus should fall on Harvard to pave the way to that understanding. Over the summer, the University’s Derek Bok Center for Teaching and Learning released its own suggestions for faculty, which took a two-pronged approach to AI in the classroom: first, acknowledging the power of AI tools (for example, their ability to connect two different sources), and second, explaining the pedagogical implications of those tools (such as banning them on essays because the course is designed to teach those very skills). This guidance, importantly, recognizes that ChatGPT doesn’t serve a singular function: Just as easily as it can instantly bang out a discussion post, it can effectively proofread and suggest initial directions for research.

Unfortunately, suggestions of this nature did not make their way into the OUE’s final guidance. By neglecting these pedagogical questions in its University-wide suggestions, the OUE missed an opportunity to partner with students in navigating these new tools.

In the absence of official OUE guidance on explaining the reasoning behind AI-use policies, individual instructors should push students to understand the strengths and limitations of this technology while acknowledging that students will almost inevitably use it. A rare few syllabi I have reviewed do exactly that, going beyond the question of prohibition versus permission to give students an opportunity to learn in a different way and see if it works for them.

In its two exams, HEB 1305: “The Evolution of Friendship” requires students to correct output generated by ChatGPT in response to an essay prompt. Using just lecture notes and readings, students demonstrate mastery of the material by correcting the nuances that AI systems might miss. In this manner, students see firsthand that generative AI tools often hallucinate information, especially on more technically advanced topics.

Jennifer Devereaux, the course head, wrote in an email to me that she believes “AI will inevitably become an integrated part of the learning experience.” Through her assignments, she hopes that students will learn “how invaluable critical thinking and traditional forms of research are to improving the health of the rapidly evolving information ecosystem they inhabit.”

Meanwhile, the syllabus of GENED 1165: “Superheroes and Power” permits students to use ChatGPT to generate ideas and drafts, but with a major caveat: Students may be asked to “explain to us just what your argument says.” In that manner, the primary work of forming and owning an argument still rests with the student.

Stephanie Burt ’94, professor of English and head of the course, explained in an email that a complete AI ban is “hard to enforce for a large class,” leading to her decision to “OK AI with strong reservations.”

“I’ve never seen a good AI-generated essay,” she added.

Ultimately, AI is here to stay. Instead of issuing an administrative rubber stamp, Harvard should push students, instructors, and researchers alike to question, discuss, and use the technology in a way that advances the core research mission of the University.

In an era when ChatGPT may soon be as common as the calculator, Harvard’s stance on AI in the classroom should be more than a binary decision: It should be an open dialogue that empowers students to navigate the AI landscape with wisdom and creativity.

Andy Z. Wang ’23, an Associate News Editor, is a Social Studies and Philosophy concentrator in Winthrop House. His column, “Cogito, Clicko Sum,” runs on triweekly Wednesdays.
