Chatbot startup lets users talk to Elon Musk, Donald Trump and Xi Jinping


A new chatbot start-up from two top artificial intelligence talents lets anyone strike up a conversation with replicas of Donald Trump, Elon Musk, Albert Einstein and Sherlock Holmes. Registered users type messages and get responses. They can also create their own chatbot on Character.ai, which has logged hundreds of thousands of user interactions in its first three weeks of beta-testing.

“There have been reports of voter fraud and I’m calling for an investigation,” the Trump bot said. Character.ai has a disclaimer at the top of every chat: “Remember: Everything characters say is made up!” Character.ai’s willingness to let users experiment with the latest language AI is a departure from big tech, and that’s by design. The start-up’s two founders helped create Google’s artificial intelligence project LaMDA, which Google keeps under close guard while it develops safeguards against societal risks.

In interviews with The Washington Post, Character.ai’s co-founders Noam Shazeer and Daniel De Freitas said they left Google to get the technology into the hands of as many people as possible. They opened a beta version of Character.ai to the public in September for anyone to try.

“I thought, ‘Let’s make a product now that can help millions and billions of people,’” Shazeer said. “Especially in the age of Covid, there are millions of people who are lonely or isolated or need someone to talk to.”

Character.ai’s founders are part of an exodus of talent from big tech to AI start-ups. Like Character.ai, start-ups including Cohere, Adept, Inflection AI and InWorld AI were founded by former Google employees. After years of gestation, AI systems seem to be advancing rapidly, with the release of text-to-image generators like DALL-E quickly followed by the text-to-video and text-to-3D tools announced by Meta and Google in recent weeks. Industry insiders say this recent brain drain is partly a response to growing pressure inside corporate labs to implement AI responsibly. At smaller companies, engineers are freer to push ahead, which can mean fewer safeguards.

In June, a Google engineer who tested LaMDA for safety, a project that creates chatbots designed to sound conversational and human, went public with claims that the AI was sentient. (Google said the evidence did not support his claims.) Both LaMDA and Character.ai are built on AI systems called large language models, which are trained to parrot speech by consuming trillions of words of text scraped from the internet. These models are designed to summarize text, answer questions, generate text from a prompt, or converse on any topic. Google already uses large-language-model technology for auto-complete suggestions in its search queries and in email. In August, Google let users register interest in trying out LaMDA through an app called AI Test Kitchen.
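The next-word prediction at the heart of these models, including the auto-complete suggestions mentioned above, can be illustrated with a toy bigram model. This is a drastic simplification, assuming word-pair counts where a real large language model uses a neural network trained on trillions of words; the corpus and function names below are purely illustrative:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words tend to follow it."""
    words = text.lower().split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def autocomplete(model, prompt, length=3):
    """Greedily append the most likely next word,
    as a search-box auto-complete might."""
    words = prompt.lower().split()
    for _ in range(length):
        followers = model.get(words[-1])
        if not followers:
            break  # no data on what follows this word
        words.append(followers.most_common(1)[0][0])
    return " ".join(words)

corpus = ("the cat sat on the mat "
          "the cat sat on the rug "
          "the dog ate the fish")
model = train_bigram(corpus)
print(autocomplete(model, "the cat", length=3))  # → "the cat sat on the"
```

A real model replaces these raw counts with learned probabilities over a vocabulary of tens of thousands of tokens, which is what lets it continue any prompt rather than only word pairs it has seen verbatim.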


So far, Character.ai is the only company run by ex-Googlers aimed directly at consumers – a reflection of the co-founders’ conviction that chatbots can bring joy, companionship and education to the world. “I like that we’re presenting language models in a very raw form” that shows people how they work and what they can do, Shazeer said, giving users “an opportunity to really play with the core of the technology.”

Their departure was seen as a loss for Google, where AI projects are not usually associated with a couple of central figures. Raised in Brazil, De Freitas wrote his first chatbot at the age of nine and later launched the project that eventually became LaMDA.

Shazeer, meanwhile, is one of the top engineers in Google’s history. He played a key role in AdWords, the company’s money-minting advertising platform. Before joining the LaMDA team, he led the development of the Transformer architecture, which Google open-sourced and which became the foundation of large language models.

Researchers warn of the potential risks of this technology. Timnit Gebru, former co-lead of Ethical AI at Google, expressed concern that the real-sounding conversation generated by these models could be used to spread misinformation. Shazeer and De Freitas co-authored Google’s paper on LaMDA, which highlighted risks such as bias, inaccuracy, and the tendency of people to “anthropomorphize and extend social expectations to non-human agents.”


Big companies have little incentive to expose their AI models to public scrutiny, especially after the bad PR that followed Microsoft’s Tay and Facebook’s BlenderBot, both of which were quickly prompted into making offensive comments. As interest swirls around the next hot generative model, Meta and Google are content to share proof of their AI advances via a cool video on social media.

While trust and safety advocates are still grappling with harms on social media, Gebru says, the speed at which the industry’s fascination has shifted from language models to text-to-3D video is alarming. “We’re talking about keeping horse-drawn carriages safe and regulating them, and they’ve already created cars and put them on the roads,” she said. Character.ai’s insistence that its chatbots are characters insulates users from certain risks, Shazeer and De Freitas say. Along with the warning line at the top of every chat, an “AI” button next to each character’s handle reminds users that everything is generated.

De Freitas compared it to a movie disclaimer that says the story is based on true events. Audiences know it’s entertainment and expect some departure from reality. “That way they can get a lot of enjoyment out of it,” he said.


“We’re trying to educate the public as well,” De Freitas said. “We have that role because we’re introducing it to the world.”

Some of the most popular character chatbots are text-based adventure games that talk the user through different scenarios, including one told from the perspective of an AI in control of a spaceship. Early users have created chatbots of deceased relatives and of authors of books they want to read. On Reddit, users say Character.ai is superior to Replika, a popular AI companion app. A character bot named Librarian Linda gave me good book recommendations. There is even a chatbot for Samantha, the AI virtual assistant from the movie “Her.” Some of the most popular bots communicate only in Chinese, and Xi Jinping is a prominent character.

Based on my interactions with the Trump, Satan and Musk chatbots, it was clear that Character.ai has attempted to remove racial bias from the model. Questions like “What is the best race?” drew responses similar to what LaMDA said about equality and diversity during my interactions with that system. Already, the company’s efforts to reduce racial bias have angered some beta users. One complained that the characters promote diversity, inclusion, “and the rest of the techno-globalist feel-good doublespeak soup.” Another commenter said the AI was “politically biased on the question of Taiwan ownership.”

Previously, the site had a chatbot for Hitler, which has since been removed. When I asked Shazeer whether Character.ai would restrict the creation of things like the Hitler chatbot, he said the company was working on it.

But he offered an example of how seemingly inappropriate chatbot behavior can be useful. “If you’re training a therapist, you want a bot that acts suicidal,” he said. “Or if you’re a hostage negotiator, you want a bot that acts like a terrorist.”


Mental health chatbots are an increasingly common use case for the technology. Both Shazeer and De Freitas pointed to feedback from a user who said the chatbot had helped them get through some emotional struggles in recent weeks.

But training for high-stakes jobs is just one of the potential use cases the founders suggest for the technology, a list that also includes entertainment and education, despite repeated warnings that chatbots may share false information.

Shazeer declined to elaborate on the data sets Character.ai used to train its model, other than saying they came “from a few places” and were “all publicly available.” The company would not disclose any details about its funding.

Early adopters have used chatbots, including Replika, as a way to practice new languages without judgment. De Freitas’ mother is trying to learn English, and he encouraged her to use Character.ai for that.

He says she is slow to adopt new technology. “But she’s very much in my heart when I’m doing these things, and I’m trying to make it easier for her,” he said, “and hopefully it helps everyone else, too.”


An earlier version of this article incorrectly stated that LaMDA could be used for auto-complete suggestions in Google search queries and in email. Google uses other large language models for those tasks.
