U-M professors vary in approaches to generative AI in the classroom 

An illustration of a professor holding a robot head.

As the use of generative AI tools such as ChatGPT becomes more prevalent across everyday life, university professors grapple with how to handle the changing academic landscape that has followed. Some courses encourage the use of AI, while others restrict it or even ban it entirely. The Michigan Daily sat down with University of Michigan professors to learn their courses’ stances on AI use, the philosophies behind those policies and the impact they have had on teaching and learning goals.

Amie Gordon, assistant professor of psychology, developed PSYCH 382: Psychology of Close Relationships in Fall 2020 and has taught it every subsequent fall term except Fall 2023. In an interview with The Daily, Gordon said she took time to study how other classrooms adapted to ChatGPT, which was released in November 2022, before integrating it into her course in Fall 2024.

“I had a teaching leave the semester that AI really came out, so I didn’t have to deal with it at that point,” Gordon said. “In preparing for this fall, I spent a lot of time thinking about what I wanted to do about it. Luckily, I had a year to watch how other people integrated it, how they dealt with it and how the University was taking a stance on it, so I made the decision to let AI be a tool.” 

As per the course syllabus, PSYCH 382 students are allowed to use AI for course assignments as long as it is not the sole author of their work. Responding to initial issues with this policy, the instructional team decided that if AI was used, students were required to state the usage at the bottom of the assignment. 

“We started seeing people occasionally turning in assignments that seemed clearly to be written by AI with no attestation,” Gordon said. “After meeting with the GSIs, we made it a rule that everybody had to turn in a statement saying whether or not they used it. Rather than just saying you used it if you did, you also had to say if you didn’t use it. That probably created the most problems, because people would forget.”

Gordon said despite the course policy, most students submitted assignments without using AI.

“I think the vast majority of the class is not heavily using AI, only a handful of students,” Gordon said. “I think it often happens when people are overwhelmed and run out of time and the option is between not turning anything in and turning something in that ChatGPT wrote. I totally get that tension. Unfortunately, it usually comes back to bite you, but I can see where people make that choice.”

In an interview with The Daily, LSA senior Hannah Lubowitz, who took PSYCH 382 in Fall 2024, said the structure of the course helped her become more comfortable with using generative AI tools.

“Prior to this course, I did not use generative AI (tools) much because I didn’t like the way they interfered with my thought processes as it relates to my work,” Lubowitz said. “This semester was the first time where I not only started to become a little bit more comfortable using it, but I also (started to) understand the different perspectives surrounding AI usage in school.”

Lubowitz said she attempted to use ChatGPT to help make her writing more concise for assignments but was ultimately unsatisfied with the results.

“I’m definitely a wordy writer, so for certain assignments in this class, which were very writing-based, I was usually over the word count,” Lubowitz said. “Sometimes I would open ChatGPT and ask, ‘Could you help me cut down on the word count?’ However, it would often change the way I was writing and I didn’t like that very much.”

Ford School of Public Policy professor Devin Judge-Lord is teaching PUBPOL 475: Climate Change Politics for the Anthropocene this semester. Judge-Lord told The Daily that Bayesian classifiers — algorithms that predict which category an input belongs to based on known information and probabilities, and which underpin many AI language tools — are not yet reliable enough for research or academic work.

“We’ve had different types of Bayesian classifiers that can give you answers based on text,” Judge-Lord said. “The latest generation, with pre-trained transformers, didn’t work as well for the tasks that I had hoped they would be able to accomplish in the research setting. I wanted students to be cautious in using them in the academic context as well.”

Judge-Lord said large language models like those found in ChatGPT have difficulty finding and understanding nuanced information, such as what he covers in his courses. 

“Once, a student came to office hours and was asking pretty detailed questions about how government agencies worked,” Judge-Lord said. “They were asking ChatGPT questions while they were talking to me and I saw over their shoulder that it was returning things that were incorrect. I had to correct them, saying, ‘Don’t write based on that, because that’s not actually how that agency is structured.’ There’s not really any reason why a large language model would know that kind of information about Michigan state agencies.”

In an interview with The Daily, Ryan Hendrickson, French lecturer and coordinator for first-year French courses, said in his classes, AI cannot be used to supplement students’ submitted work.

“At the moment, we are treating it like any other tool or outside use that would fall under LSA policies on academic integrity,” Hendrickson said. “We tell students that if they submit any work that is not their own, in part or in its entirety, then that would fall under academic dishonesty. This could be other people, automated translators like Google Translate or generative AI.”

Despite restrictions, Hendrickson said one positive use of AI is that it allows students to supplement their learning outside of written assignments. 

“One suggestion that I have for my students is to use generative AI to have a conversation partner you can text back and forth with,” Hendrickson said. “You can give it a prompt to say, ‘I’m in my first semester of French: here’s what we’re studying, here’s the vocab, here’s the grammar. Talk to me as if we were both at that level.’ Then it can critique and give a report of any errors it detected. I do also caution students that sometimes it will not provide 100% accurate feedback or language use.”

Hendrickson said other course instructors should make an effort to explore AI as they work to integrate it into their teaching strategies.

“I would call on other instructors to go out and explore it, to learn about it,” Hendrickson said. “I think it’s here, and I don’t think it’s going anywhere. If we can’t find ways to engage with it productively, then I think we’re doing a disservice to ourselves and to the students.”

Daily Staff Reporter Thomas Gala-Garza can be reached at tmgala@umich.edu.

