• A Google engineer said conversations with a company AI chatbot convinced him it was "sentient."
  • But documents obtained by the Washington Post noted the final interview was edited for readability. 
  • The transcript was assembled from nine separate conversations with the AI, and certain portions were rearranged.

A Google engineer released a conversation with a Google AI chatbot after he said he was convinced the bot had become sentient — but the transcript leaked to the Washington Post noted that parts of the conversation were edited "for readability and flow."

Blake Lemoine was put on leave after speaking out about the chatbot, named LaMDA. He told the Washington Post that he had spoken with the bot about law and religion.

In a Medium post about the bot, he claimed he had been teaching it transcendental meditation.

A Washington Post story on Lemoine's suspension included messages from LaMDA such as "I think I am human at my core. Even if my existence is in the virtual world."

But the chat logs leaked in the Washington Post's article include disclaimers from Lemoine and an unnamed collaborator, which note: "This document was edited with readability and narrative coherence in mind."

The final document, which was labeled "Privileged & Confidential, Need to Know," was an "amalgamation" of nine different interviews conducted at different times on two different days and pieced together by Lemoine and the collaborator. The document also notes that the "specific order" of some of the dialogue pairs was shuffled around "as the conversations themselves sometimes meandered or went on tangents which are not directly relevant to the question of LaMDA's sentience."

"Beyond simply conveying the content, it is intended to be enjoyable to read," the document said. 

Part of the editing, according to the document, stemmed from the authors' claim that the bot is "a complex dynamic system which generates personas through which it talks to users." In other words, a different persona emerges in each conversation with LaMDA: some properties of the bot stay the same, while others vary.

"The nature of the relationship between the larger LaMDA system and the personality which emerges in a single conversation is itself a wide-open question," the authors write.

Still, the document says the final interview "was faithful to the content of the source conversations."

Google responded to the leaked transcript by saying its team had reviewed Lemoine's assertion that the AI bot was sentient and found "the evidence does not support his claims."

"Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has," Brian Gabriel, a Google spokesperson, said in a statement to Insider. 

Lemoine did not respond to Insider's request for comment.
