In February, Ella Stapleton, a senior at Northeastern University, was going over her notes from an organizational behavior class when she stumbled upon something unusual. Was that a ChatGPT prompt from her professor?
Within a document her business professor had created for a lesson on leadership models, she noticed an instruction addressed to ChatGPT: “Expand all areas. More in depth and concrete.” The instruction was followed by a list of positive and negative leadership traits, complete with definitions and bullet points.
Stapleton texted a classmate.
“Did you see the notes he uploaded to Canvas?” she asked, referring to the university’s software for course materials. “He created it using ChatGPT.”
“OMG STOP,” her classmate responded. “What’s going on?”
Curious, Stapleton began to investigate. She went through the professor’s slides and discovered more signs of AI involvement: inconsistencies in the text, skewed images, and glaring mistakes.
She was frustrated. Given the school’s tuition and reputation, she expected a high-quality education. This course was crucial for her business major. The syllabus clearly prohibited “academic fraudulent activities,” including the misuse of AI and chatbots.
“He tells us not to use it, yet he uses it himself,” she remarked.
Stapleton lodged a formal complaint with Northeastern’s business school, citing the inappropriate use of AI along with other concerns about the teaching methods, and demanded a refund of the tuition for that class: more than $8,000, about a quarter of her semester’s total.
When ChatGPT launched in late 2022, it created a whirlwind of concern across educational institutions, because it made cheating incredibly easy. Students tasked with writing essays could let the tool handle them in mere seconds. Some institutions banned it, while others introduced AI-detection services, despite concerns about their accuracy.
However, the tide has turned. Nowadays, students are scrutinizing professors for their heavy reliance on AI, voicing complaints on course-review platforms and combing through course materials for telltale terms that chatbots tend to overuse. They call out the hypocrisy and make a financial argument: they deserve instruction from humans, not from algorithms they could access for free.
On the other side, professors have claimed they use AI chatbots as a means to enhance education. An instructor interviewed by The New York Times stated that the chatbot streamlined their workload and acted as an automated teaching assistant.
The number of educators using these tools is on the rise. In a national survey conducted last year, 18% of more than 1,800 higher-education instructors identified as frequent users of generative AI tools; in this year’s follow-up survey, that share nearly doubled, according to Tyton Partners, the consultancy behind the study. AI companies are eager to facilitate the shift, with OpenAI and Anthropic recently releasing enterprise versions of their chatbots designed specifically for educational institutions.
(The Times has sued OpenAI for copyright infringement, alleging that the company used news content without permission.)
Generative AI is clearly here to stay, yet universities are still grappling with evolving standards. Professors are on a learning curve of their own and, like Stapleton’s instructor, are sometimes tripped up by the technology’s shortcomings and their students’ scrutiny.
Making the grade
Last fall, Marie, 22, submitted a three-page essay for her online anthropology course at Southern New Hampshire University. Upon checking her grade on the school’s platform, she was pleased to see an A. But her professor’s comments contained traces of an exchange with ChatGPT, including a grading rubric that had been pasted into the chatbot and a request for “great feedback” for Marie.
“To me, it felt like the professor didn’t even read my work,” said Marie, who asked to remain anonymous. She said she understood the temptation to lean on AI, given how many students instructors manage and that, for some, teaching amounts to a “third job.”
Marie confronted her professor about the issue during a Zoom meeting. The professor said that they had read her essays and had used ChatGPT only as a guide, which the school permitted.
Robert McAuslan, vice president of AI at Southern New Hampshire, said the school embraces AI’s potential to revolutionize education and has issued guidelines for faculty and students to “ensure this technology enhances creativity rather than replaces it.” Its list of do’s and don’ts encourages teachers who use tools like ChatGPT and Grammarly to keep their feedback authentic and human-focused.
“These tools should not replace the work,” Dr. McAuslan stated. “Instead, they should enhance an already established process.”
After encountering a second professor who also appeared to provide AI-generated feedback, Marie opted to transfer to another university.
Paul Schoblin, an English professor at Ohio University in Athens, empathized with her frustration. “I’m not a huge fan of that,” Dr. Schoblin remarked after hearing about Marie’s experience. He also holds a position as an AI Faculty Fellow, tasked with developing effective strategies to integrate AI in teaching and learning.
“The real value you add as an educator comes from the feedback you provide to your students,” he noted. “It’s the personal connection we foster with our students, as they are directly impacted by our words.”
Though he advocates for the responsible integration of AI in education, Dr. Schoblin asserted that it shouldn’t merely simplify instructors’ lives. Students, he argued, must also learn to use the technology ethically and responsibly: “If mistakes happen, the repercussions could lead to job loss,” he warned.
He cited a recent incident in which an official at Vanderbilt University’s school of education responded to a mass shooting at another university with an email to students emphasizing community bonds. A sentence at the bottom disclosed that ChatGPT had been used to compose it. Students criticized the outsourcing of empathy, and the officials involved temporarily stepped down.
However, not all situations are so clear-cut. Dr. Schoblin remarked that establishing reasonable rules is challenging, as acceptable AI usage can differ by subject. His department’s Center for Teaching, Learning, and Assessment has instead emphasized principles for integrating AI, specifically eschewing a “one-size-fits-all” approach.
The Times reached out to numerous professors whose students had noted their AI usage in online reviews. Some instructors admitted to using ChatGPT to create quizzes and programming assignments for computer science courses, even as students reported that the quizzes didn’t always make sense. Others used it to organize their feedback or to make its tone more positive. As experts in their fields, they said, they could spot instances of AI “hallucinations,” where the tool generated false information.
There was no consensus among them on what practices were acceptable. Some educators utilized ChatGPT to assist students in reflecting on their work, while others denounced such practices. Some stressed the importance of maintaining transparency with students regarding generative AI use, while others opted to conceal their usage due to student wariness about technology.
Nevertheless, most felt that Stapleton’s experience at Northeastern—where her professor appeared to use AI to generate class notes and slides—was defensible. That was Dr. Schoblin’s view, provided the professor edited the AI’s output to reflect his own expertise. He likened it to the long-standing academic practice of using content from third-party publishers, such as lesson plans and case studies.
The notion that professors who generate slides with AI are “some sort of monsters”? “It’s absurd to me,” he remarked.
Calculators on steroids
Christopher Kwaramba, a business professor at Virginia Commonwealth University, referred to ChatGPT as a time-saving partner. He mentioned that lesson plans that once required days to create could now be completed in mere hours. He employs it to generate datasets for fictional retail chains used in exercises designed to help students grasp various statistical concepts.
“I see it as the age of steroid calculators,” Dr. Kwaramba stated.
As a result, Dr. Kwaramba said, he now has more time for student office hours.
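As an illustration of the kind of exercise Dr. Kwaramba describes, here is a minimal sketch of generating a synthetic dataset for a fictional retail chain. The column names, store count, and built-in regression relationship are illustrative assumptions, not his actual course materials.

```python
# Hypothetical sketch: a synthetic dataset for a fictional retail chain,
# of the kind an instructor might generate for a statistics exercise.
# All names and parameters here are assumptions for illustration.
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed so every student sees the same data
n_stores = 50

df = pd.DataFrame({
    "store_id": range(1, n_stores + 1),
    "region": rng.choice(["North", "South", "East", "West"], size=n_stores),
    "ad_spend": rng.normal(loc=10_000, scale=2_500, size=n_stores).round(2),
})

# Embed a known linear relationship plus noise, so students can
# recover the coefficient with a simple regression.
df["monthly_sales"] = (50_000 + 3.2 * df["ad_spend"]
                       + rng.normal(0, 8_000, size=n_stores)).round(2)

print(df.head())
```

Because the relationship is planted by the instructor, a class can check its regression estimates against a known ground truth of 3.2 dollars of sales per advertising dollar.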
Conversely, other professors, such as Harvard’s David Malan, reported that AI diminished student attendance during office hours. Dr. Malan, a computer science professor, integrated a custom AI chatbot into his popular introductory programming course, allowing hundreds of students access for assistance with coding assignments.
Dr. Malan had to refine the chatbot so that it offered only guidance, not complete answers. In a 2023 survey, the chatbot’s inaugural year, most of the 500 students polled found it beneficial.
Freed from fielding routine questions about course materials during office hours, Dr. Malan and his teaching assistants can now focus on more meaningful interactions with students, like weekly lunches and hackathons. “These are more memorable moments and experiences,” Dr. Malan reflected.
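The article does not describe how Dr. Malan constrained his bot, but a common pattern for guidance-only tutors is a system prompt that forbids complete solutions. The sketch below uses OpenAI’s Python client; the model name, prompt wording, and function name are assumptions, not Harvard’s implementation.

```python
# Hypothetical sketch of a "guidance, not answers" teaching assistant:
# a system prompt instructs the model to hint rather than solve.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a teaching assistant for an introductory programming course. "
    "Guide students toward a solution with questions and hints. "
    "Never write complete code for an assignment, even if asked."
)

def ask_assistant(student_question: str) -> str:
    # Send the student's question along with the guardrail prompt.
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": student_question},
        ],
    )
    return response.choices[0].message.content

print(ask_assistant("My loop never terminates. What should I check?"))
```

A prompt-level guardrail like this is not airtight; students can sometimes coax full answers out anyway, which is why such tools tend to be refined over time, as Dr. Malan’s was.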
Katy Pearce, a communications professor at the University of Washington, developed a tailored AI chatbot trained on assignments she had previously assessed, enabling students to receive feedback that mimics her style at any hour of the day or night. This is particularly advantageous, she said, for students hesitant to seek help.
“Can we foresee a future where many graduate teaching assistants might be replaced by AI?” she pondered. “Yes, absolutely.”
What implications would that have for the pipeline of future professors, who typically emerge from the teaching assistant ranks?
“That will undoubtedly pose a challenge,” Dr. Pearce concluded.
A teachable moment
After filing her complaint with Northeastern, Stapleton participated in several meetings with business school officials. In May, the day after graduation, she learned that her tuition reimbursement wouldn’t be granted.
Her professor, Rick Arrowwood, expressed regret about the incident. Dr. Arrowwood, an adjunct with nearly two decades of teaching experience, said he had run his class materials through several AI tools to give them a “fresh perspective”: ChatGPT, the AI search engine Perplexity, and a presentation generator called Gamma. At first glance, he said, the outputs appeared impressive.
“In hindsight, I wish I had paid closer attention,” he commented.
He said he had shared the materials online for students but had not used them during class sessions, and that he recognized the errors only when school officials inquired about them.
The awkward episode convinced him that faculty members must be more cautious with AI and transparent with students about its usage. Northeastern recently established an official AI policy that mandates attribution whenever an AI system is employed and requires a review of its output for “accuracy and quality.” A Northeastern spokesperson said the institution aims to “embrace the use of artificial intelligence to enhance all facets of education, research, and operations.”
“I cover everything,” Dr. Arrowwood asserted. “If my experience can serve as a learning opportunity for others, then that’s my happy place.”
