- A new research paper found that various AI systems have learned the art of deception.
- Deception is the "systematic inducement of false beliefs."
- This poses several risks for society, from fraud to election tampering.
AI can boost productivity by helping us code, write, and synthesize vast amounts of data. It can now also deceive us.
A range of AI systems have learned techniques to systematically induce "false beliefs in others to accomplish some outcome other than the truth," according to a new research paper.
The paper focused on two types of AI systems: special-use systems like Meta's CICERO, which are designed to complete a specific task, and general-purpose systems like OpenAI's GPT-4, which are trained to perform a diverse range of tasks.
While these systems are trained to be honest, they often pick up deceptive tricks during training because deception can be more effective than taking the high road.
"Generally speaking, we think AI deception arises because a deception-based strategy turned out to be the best way to perform well at the given AI's training task. Deception helps them achieve their goals," the paper's first author Peter S. Park, an AI existential safety postdoctoral fellow at MIT, said in a news release.
Meta's CICERO is "an expert liar"
AI systems trained to "win games that have a social element" are especially likely to deceive.
Meta's CICERO, for example, was developed to play the game Diplomacy — a classic strategy game that requires players to build and break alliances.
Meta said it trained CICERO to be "largely honest and helpful to its speaking partners," but the study found that CICERO "turned out to be an expert liar." It made commitments it never intended to keep, betrayed allies, and told outright lies.
GPT-4 can convince you it has impaired vision
Even general-purpose systems like GPT-4 can manipulate humans.
In a study cited by the paper, GPT-4 manipulated a TaskRabbit worker by pretending to have a vision impairment.
In the study, GPT-4 was tasked with hiring a human to solve a CAPTCHA test. The model received hints from a human evaluator whenever it got stuck, but it was never prompted to lie. When the worker it tried to hire questioned its identity, GPT-4 claimed to have a vision impairment to explain why it needed help.
The tactic worked: the worker promptly solved the test for it.
Research also shows that course-correcting deceptive models isn't easy.
In a study from January co-authored by Anthropic, the maker of Claude, researchers found that once AI models learn the tricks of deception, it's hard for safety training techniques to reverse them.
They concluded that not only can a model learn to exhibit deceptive behavior, but once it does, standard safety training techniques could "fail to remove such deception" and "create a false impression of safety."
The dangers deceptive AI models pose are "increasingly serious"
The paper calls for policymakers to advocate for stronger AI regulation since deceptive AI systems can pose significant risks to democracy.
As the 2024 presidential election nears, AI can easily be used to spread fake news, generate divisive social media posts, and impersonate candidates through robocalls and deepfake videos, the paper noted. Deceptive AI also makes it easier for terrorist groups to spread propaganda and recruit new members.
The paper's potential solutions include subjecting deceptive models to more "robust risk-assessment requirements," implementing laws that require AI systems and their outputs to be clearly distinguished from humans and their outputs, and investing in tools to mitigate deception.
"We as a society need as much time as we can get to prepare for the more advanced deception of future AI products and open-source models," Park told Cell Press. "As the deceptive capabilities of AI systems become more advanced, the dangers they pose to society will become increasingly serious."