• Universities across the globe are scrambling to draft AI policies before the new academic term.
• But some educators say the real threat isn't AI, but a "lagging and outdated approach to education."
• These educators say that schools can't avoid talking about ChatGPT in the classroom.

As the new school year approaches, universities are scrambling to draft new policies for a post-ChatGPT classroom.

Some schools have banned the tool entirely, over fears it would enable AI-assisted plagiarism. Some professors have opted to return to paper exams to fight students using ChatGPT, while one university even resorted to springing surprise checks on students suspected of using the tool.

Madison White, a student at Stetson University in Florida, told The Wall Street Journal about professors' widespread fears of AI-assisted cheating. "They often immediately assumed that it was a hack for students to get away from doing readings or homework," she said.

Insider spoke to six educators at universities based in Singapore, the Netherlands, Hong Kong, and the United States. Four of them said that fears over the use of AI in classrooms were overblown, and all six said schools can't afford to shut ChatGPT out of classrooms entirely.

One of them, Ian Chong, an associate professor of international relations at the National University of Singapore, said he wasn't losing sleep over AI seeping into education.

"I like to tell my students if they use AI to write, I'll use AI to give comments and write their recommendation letters," he said. 

Schools shouldn't over-rely on AI detection tools, the educators said

The concern over AI-assisted cheating has fueled a demand for tools to detect AI-generated content, but the would-be solution has introduced a new set of problems.

Rebecca Tan, a political science lecturer at the National University of Singapore, told Insider that AI detection tools can be "notoriously inaccurate."

As of July, plagiarism-checking software Turnitin's AI detection tool had been used to review over 65 million papers since its launch in April — despite concerns over its accuracy. Also in July, ChatGPT creator OpenAI quietly shut down its AI detection tool over concerns about its inaccuracy.

Instead of relying on AI detection tools, educators need to get innovative as AI tools become ubiquitous — through ideas like having students submit the introductions to their essays first, Tan said.

"It's short so students don't feel like they need to use ChatGPT, but it gives me a point of reference when I'm marking their longer essay to see if the final product deviates greatly from the initial introduction," she said of the tactic.

Educators are also rethinking how students are assessed. Chong said he's using more "complex, scenario-based" exercises with students, for which ChatGPT's responses remain inadequate. Michael Rivera, a lecturer in the University of Hong Kong's history department, recommends using live discussions and reflections as assessments.

But, "pursuing such an approach is only realistic if class sizes are kept small enough to pay personal attention to everyone and provide detailed feedback throughout the school term," said Shannon Ang, an assistant professor of sociology at Singapore's Nanyang Technological University, who was critical of the pressures faced by universities to expand class sizes in the name of "efficiency."

Plus, an emphasis on getting around AI might be missing the point.

Schools can't afford to ignore teaching with AI

Educators Insider spoke to agreed that universities shouldn't shut ChatGPT out of classrooms entirely, as ignoring the chatbot might do more harm than good.

"What I do, for example, is to let students use AI tools to answer certain research questions, and then compare it to Wikipedia — their sort of second love in the pre-AI days — before comparing it to their textbooks," said Kai Jonas, a professor of applied social psychology at Maastricht University in the Netherlands.

Jonas said the key is to embrace AI tools in the classroom and teach students how to fact-check ChatGPT's responses.

"We've already seen cases of AI generating false sources of information, and this would be a useful example to interrogate," said Joana Cook, an assistant professor of political violence at Netherlands' Leiden University, currently lecturing at Johns Hopkins University.

The need to vet information from AI stems from the technology's tendency to "hallucinate," or present wrong information as fact. High-profile instances include the technology making up fake court cases to cite — leading to a law firm being fined $5,000 — and retractions by media outlets over errors made by AI.

The real threat to education isn't AI, it's boring lessons

When asked about threats to education, Nanyang Technological University's Ang said: "AI tools are not the threat — a lagging and outdated approach to education is."

Educators who take the time to provide thoughtful feedback are likelier to see engaged students who do the work, as students understand that "it is pointless to receive feedback on work that is not your own," said Ang.

That's because students who reach for AI tools like ChatGPT as a shortcut tend to be disengaged or swamped, and so don't see the value in doing the work themselves, echoed the National University of Singapore's Tan.

It takes work from professors and educators to overcome inertia and adapt to the technology while meeting students' needs, Jonas said: "I can see that some of them are hesitant to do so because they have to revise their curricula and teaching materials."

"I believe it is 100% the success or the failure of the teacher if students don't know how to use AI tools strategically in their education," said the University of Hong Kong's Rivera.

Read the original article on Business Insider