• AI promises to change the world — but so far, it's being used a lot for homework help.
  • OpenAI has a tool to detect if something was written by ChatGPT but hasn't released it.
  • Sam Altman, hero of the common student, scourge of the AP History teacher.

When ChatGPT first came out in late 2022, one obvious use was immediately clear: writing term papers.

This has been a thorn in the side of teachers and professors. But as generative AI is adopted more widely, has it become more than just a homework helper?

Two new pieces of information point us toward a conclusion we probably all knew in our hearts: Chatbots and generative AI are a bonanza for students looking for writing help.

First, The Washington Post just published "What do people really use chatbots for? It's a lot of sex and homework." The paper's reporters categorized conversations from WildChat, a large research dataset of AI chatbot conversations, and found that the most common use, at 21%, was "creative writing and role play": asking the bot to write fan fiction, movie scripts, or Dungeons & Dragons characters, for example.

The second most common category of chatbot conversations — at 18% — was for homework help. (One example: "Explain the Monroe Doctrine in a sentence.")

Other, less common categories included things like search, translation, and coding.

Aside from the Post's reporting, there's another new reason to suspect that homework "help" — and maybe cheating! — is a massively popular use for ChatGPT and other text-based AI.

The Wall Street Journal reported that OpenAI has been developing a tool that can detect writing produced by ChatGPT, but the company won't release it (yet).

The tool works by having ChatGPT embed a sort of "watermark" in the way it chooses words. The watermark would be undetectable to human eyes but could be picked up by AI, and it would reportedly be 99.9% accurate at telling whether something was written by ChatGPT or by a real human.
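Token-level watermarking has been described in academic research, and a toy version is simple enough to sketch. The Python snippet below is a minimal illustration of the general "green list" idea from that literature, not OpenAI's actual, unreleased method, and every name and number in it is invented for illustration: the generator quietly nudges its word choices toward a pseudorandom subset of the vocabulary keyed on the previous word, and the detector checks whether an implausibly large share of words landed in that subset.

```python
# Toy sketch of "green list" text watermarking, as described in the research
# literature. Illustrative only; OpenAI has not said how its detector works.

import hashlib
import math
import random

GREEN_FRACTION = 0.5  # share of the vocabulary marked "green" at each step


def is_green(prev_token: str, token: str) -> bool:
    """Deterministically decide whether `token` is on the green list induced
    by the previous token. Both generator and detector can compute this,
    but a human reader can't see it."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] / 255 < GREEN_FRACTION


def biased_choice(prev_token: str, candidates: list[str], boost: float = 4.0) -> str:
    """Generator side: sample the next token, up-weighting green candidates."""
    weights = [boost if is_green(prev_token, c) else 1.0 for c in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]


def detect(tokens: list[str]) -> float:
    """Detector side: z-score for 'more green tokens than chance allows'.
    Unwatermarked text should score near 0; watermarked text scores high."""
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    n = len(tokens) - 1
    expected = n * GREEN_FRACTION
    stddev = math.sqrt(n * GREEN_FRACTION * (1 - GREEN_FRACTION))
    return (hits - expected) / stddev


if __name__ == "__main__":
    vocab = [f"word{i}" for i in range(50)]

    # "Watermarked" text: each token is drawn with the green-list bias.
    marked = ["start"]
    for _ in range(200):
        marked.append(biased_choice(marked[-1], random.sample(vocab, 10)))

    # Plain text: tokens drawn with no bias at all.
    plain = ["start"] + [random.choice(vocab) for _ in range(200)]

    print("watermarked z-score:", round(detect(marked), 1))  # far above chance
    print("plain z-score:      ", round(detect(plain), 1))   # near 0
```

On a couple hundred words, the biased text produces a statistical signal far above chance while ordinary text hovers near zero, which is the kind of gap that could plausibly support a very-high-accuracy detection claim on longer documents.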

Still, OpenAI hasn't released the tool, much to the frustration of some people inside the company, according to the report. A spokesperson for OpenAI told the WSJ that the company had concerns the tool could hurt non-native English speakers who use ChatGPT. "The text watermarking method we're developing is technically promising but has important risks we're weighing while we research alternatives," the spokesperson said.

OK, reasonable. I think we all like the idea of OpenAI taking its time and thinking long and hard about the potential harms of releasing a new tool.

But here's the other part: According to the report, "OpenAI surveyed ChatGPT users and found 69% believe cheating detection technology would lead to false accusations of using AI. Nearly 30% said they would use ChatGPT less if it deployed watermarks and a rival didn't." (Emphasis mine).

That sounds like a pretty good sign that "cheating on my homework" is a popular use of ChatGPT, and that OpenAI knows it.

When Sam Altman was seen driving a multimillion-dollar Koenigsegg Regera, the most common comments on TikTok were things like, "Bro carried me through half of my classes last year I hope he enjoys that beauty."

There's also evidence from last summer that ChatGPT use dropped as soon as summer vacation started — a good indication that student use was a big driver.

Of course, there are plenty of unique and wonderful uses for generative AI text creation, like writing a fan letter to your favorite Olympian.

I admit that my skepticism about whether ChatGPT is useful for much beyond homework cheating comes with my own baggage: AI is an existential threat to my job as a person who types words, and I don't mind writing things myself. Then again, I get to write things like this instead of, say, a three-paragraph summary of the Monroe Doctrine. If I were an 11th grader right now, I suspect I'd be pretty enthused by it.
