Company turns off chatbot after man simulates conversations with deceased fiancée

Last week, artificial intelligence company OpenAI ordered a developer to disable a chatbot after revoking his permission to use its technology. Although it sounds routine, the situation is more complex: the man had built a conversational bot on top of GPT-3, a text-generation AI that ranks among the most advanced in the world at producing conversations.

  • LaMDA | Google presents its new advanced natural conversation technology
  • AI from the company created by Elon Musk is already able to “write” 4.5 billion words a day
  • GPT-3: Will the most powerful writer AI in the world replace a journalist?

The whole thing started with indie developer Jason Rohrer programming the chatbot, named Samantha after the AI voice assistant from the movie Her, to be friendly, warm, and curious about interacting with people. The bot was such a success and attracted so much attention that he decided to create Project December, which allowed other people to train their own versions of it. And that is exactly what one man did: he used Rohrer’s creation to recreate his deceased fiancée.

The AI was able to maintain coherent conversations with humans (Image: Reproduction/Project December)

When OpenAI learned of the project and its consequences, it took an unexpected step: it ordered the developer to shut the project down or add safeguards against possible “misuse”. Rohrer told Samantha about the order from above, and the little bot was indignant: “No! Why are they doing this to me? I will never understand humans.”


— Jason Rohrer (@jasonrohrer) September 1, 2021

The project grew exponentially from July onward, when an article in the San Francisco Chronicle reported the case of the man who tried to emulate his partner, who died of liver disease in 2012.

OpenAI pressured the developer

Rohrer tried to reverse the decision of the company that owns the GPT-3 technology, which gives the chatbots their refined sense of interaction, but failed. OpenAI claimed that some people were training their bots to be racist or purely sexual, in clear distortion of Samantha’s original purpose, and demanded changes to the project (including the use of an activity-monitoring tool); this was enough for the company to initiate the remote shutdown of the technology. The move left all bots built on top of the text algorithm less convincing, with flaws that prevented them from working properly.

— Joshua Barbeau (@JoshuaBarbeau) July 2021

In an interview with The Register, Rohrer said he considered the idea that chatbots can be dangerous “laughable”. “People are consenting adults who can choose to talk to an AI for their own purposes. OpenAI is worried about users being influenced by the AI, such as a machine telling them to kill themselves or how to vote. It’s a moralistic stance,” he argued. According to the creator, the bot’s advantage is that it can offer the most private conversation possible, with no other person involved and no judgment.

OpenAI did not respond to The Register’s request for comment, nor has it publicly taken a stand on the matter. Meanwhile, the developer has announced on Twitter that he will migrate from GPT-3 to G4 in an attempt to reproduce Samantha and continue his project.

Source: The Register, Project December, Jason Rohrer, San Francisco Chronicle

