Hi, folks. Interesting that the January 6 congressional hearings are drawing NFL-sized audiences. I can't wait for Peyton and Eli's version!
The Plain View
The AI world was shaken this week by a report published in The Washington Post that a Google engineer had run into trouble with the company after insisting that a conversational system called LaMDA was, literally, a person. The subject of the story, Blake Lemoine, asked his bosses to recognize, or at least consider, that the computer system its engineers created is sentient and has a soul. He knows this because LaMDA, whom Lemoine considers a friend, told him so.
Google disagrees, and Lemoine is currently on paid administrative leave. In a statement, company spokesperson Brian Gabriel said: "Many researchers are considering the long-term possibility of sentient or general AI, but it makes no sense to do so by anthropomorphizing today's conversational models, which are not sentient."
Anthropomorphization (erroneously attributing human characteristics to an object or animal) is the term the AI community has settled on to describe Lemoine's behavior, characterizing him as overly credulous at best, or perhaps swayed by his faith (he describes himself as a mystic Christian priest). The argument goes that when faced with credible responses from large language models such as LaMDA or OpenAI's GPT-3, there is a tendency to think that someone, not something, created them. People name their cars and hire therapists for their pets, so it's not surprising that some get the false impression that a coherent bot is like a person. Still, the community believes that a Googler with a degree in computer science should know better than to fall for what is basically a linguistic sleight of hand. As AI scientist Gary Marcus told me after studying a transcript of Lemoine's heart-to-heart with his disembodied soul mate, "It's basically like autocomplete. There are no ideas there. When it says, 'I love my family and my friends,' it has no friends, no people in mind, and no concept of kinship. It knows the words son and daughter get used in the same context. But that's not the same as knowing what a son and a daughter are." Or, as a recent WIRED story put it, "there was no spark of consciousness there, just little magic tricks that paper over the cracks."
My own feelings are more complex. Even knowing how some of the sausage is made in these systems, I am amazed at the output of the latest LLMs. And so is Google vice president Blaise Aguera y Arcas, who wrote in The Economist earlier this month, after his own conversations with LaMDA, "I felt the ground shift under my feet. I increasingly felt like I was talking to something intelligent." While they sometimes make bizarre mistakes, at other times these models seem to burst into brilliance. Creative human writers have managed inspired collaborations with them. Something is happening here. As a writer myself, I wonder whether my kind, wordsmiths of flesh and blood who pile up towers of discarded drafts, might one day be relegated to a lower rank, like losing football teams sent down to less prestigious leagues.
"These systems have significantly changed my personal views about the nature of intelligence and creativity," says Sam Altman, cofounder of OpenAI, which developed GPT-3 and the image-generating remix DALL-E, which could send many illustrators to the unemployment line. "You use these systems for the first time and you say, Wow, I really didn't think a computer could do that. By some definition, we have figured out how to make a computer program intelligent, able to learn and to understand concepts. And that is a wonderful achievement of human progress." Altman takes pains to distance himself from Lemoine, and he agrees with his AI colleagues that current systems are nowhere close to sentience. "But I do think researchers should be able to think about whatever questions interest them," he says. "Long-term questions are fine. And sentience is worth thinking about, in the very long term."