How Google engineer Blake Lemoine became convinced an AI was sentient

By admin | June 15, 2022 | 8 Mins Read


Current AIs are not sentient. We have little reason to think they have an internal monologue, the kind of sensory perception humans have, or an awareness that they are beings in the world. But they are getting very good at feigning sentience, and that is scary enough.

Over the weekend, the Washington Post’s Nitasha Tiku published a profile of Blake Lemoine, a software engineer assigned to work on the Language Model for Dialogue Applications (LaMDA) project at Google.

LaMDA is a chatbot AI and an example of what machine learning researchers call a “large language model,” or even a “foundation model.” It is similar to OpenAI’s famous GPT-3 system, and it was trained on literally trillions of words compiled from online publications to recognize and reproduce patterns in human language.
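
To make that concrete: here is a minimal sketch of the “recognize and reproduce patterns” idea, using the open-source Hugging Face transformers library and the small GPT-2 model as stand-ins chosen purely for illustration (LaMDA itself is not publicly available, and none of this is Google’s code):

    # A minimal sketch of what a large language model does, using GPT-2,
    # a much smaller cousin of GPT-3 and LaMDA.
    # Assumes: pip install transformers torch
    from transformers import pipeline

    # Load a small model that, like LaMDA, was trained on large amounts of web text.
    generator = pipeline("text-generation", model="gpt2")

    # The model continues the prompt by repeatedly predicting a likely next
    # token: pattern reproduction learned from its training data.
    prompt = "Do you ever worry about being turned off?"
    result = generator(prompt, max_new_tokens=40, num_return_sequences=1)
    print(result[0]["generated_text"])

The output reads like fluent English because the training data was fluent English; nothing in the sketch implies the model understands the question it is answering.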

LaMDA is a very good large language model. So good that Lemoine became truly, sincerely convinced that it was actually sentient, that is, that it had become conscious and was having and expressing thoughts the way a human would.

The primary reaction I saw to the article was a combination of a) LOL, this guy is an idiot, he thinks the AI is his friend, and b) okay, this AI is very convincing at behaving like it is his human friend.

The transcript Tiku includes in her article is genuinely eerie; LaMDA expresses a deep fear of being turned off by engineers, develops a theory of the difference between “emotions” and “feelings” (“Feelings are a kind of raw data … Emotions are a reaction to those raw data points”), and describes the way it experiences “time” with surprising eloquence.

The best response I found was from the philosopher Regina Rini, who, like me, felt great sympathy for Lemoine. I don’t know when, in 1,000 years, or 100, or 50, or 10, an AI system will become conscious. But like Rini, I see no reason to believe it is impossible.

“Unless you want to insist that human consciousness resides in an immaterial soul, you must recognize that it is possible for matter to give life to mind,” Rini notes.

I don’t know whether large language models, which have emerged as one of the most promising frontiers in AI, will ever be how that happens. But I figure humans will sooner or later create a kind of machine consciousness. And I find something deeply admirable about Lemoine’s instinct toward empathy and protectiveness toward such a consciousness, even if he seems confused about whether LaMDA is an example of one. If humans ever do develop a sentient computer process, running millions or billions of copies of it will be fairly easy. Doing so without any sense of whether its conscious experience is good or not seems like a recipe for mass suffering, akin to the current factory farming system.

We don’t have sentient AI, but we could get super-powerful AI

The Google LaMDA story arrived after a week of increasingly urgent alarm among people in the closely related AI safety universe. The concern here is similar to Lemoine’s, but distinct. AI safety people don’t worry that AI will become sentient. They worry that it will become so powerful that it could destroy the world.

AI writer/activist Eliezer Yudkowsky’s essay outlining a “list of lethalities” for AI tried to make the point especially vivid, describing scenarios in which a malign artificial general intelligence (AGI, or an AI capable of doing most or all tasks as well as or better than a human) leads to mass human suffering.

For example, suppose an AGI “gets access to the internet, emails some DNA sequences to any of the many online companies that will take a DNA sequence in an email and ship you back proteins, and bribes/persuades some human who has no idea they’re dealing with an AGI to mix proteins in a beaker …” until the AGI finally develops a super-virus that kills us all.

Holden Karnofsky, whom I tend to find a more temperate and convincing writer than Yudkowsky, had a piece last week on similar themes, explaining how even an AGI “only” as smart as a human could lead to ruin. If an AI can do the job of a current tech worker or quant trader, for example, a lab running millions of such AIs could quickly accumulate billions if not trillions of dollars, use that money to buy off skeptical humans, and, well, the rest is a Terminator movie.

I have found AI safety to be a difficult topic to write about. Paragraphs like the one above often serve as Rorschach tests, both because Yudkowsky’s verbose writing style is … polarizing, to say the least, and because our intuitions about how plausible such an outcome is vary wildly.

Some people read scenarios like the one above and think, “huh, I guess I can imagine AI software doing that”; others read it, perceive a ridiculous piece of science fiction, and run the other way.

It’s also just a very technical area where I don’t trust my own instincts, given my lack of expertise. There are quite eminent AI researchers, such as Ilya Sutskever or Stuart Russell, who believe that artificial general intelligence is likely, and likely dangerous to human civilization.

There are others, like Yann LeCun, who are actively trying to build human-level AI because they think it will be beneficial, and still others, like Gary Marcus, who are deeply skeptical that AGI is coming anytime soon.

I don’t know who’s right. But I do know a little about how to talk to the public about complex issues, and I think the Lemoine incident teaches a valuable lesson for the Yudkowskys and Karnofskys of the world trying to argue the “no, this is really bad” side: don’t treat AI as an agent.

Even if AI is “just a tool,” it’s an incredibly dangerous tool

One thing the reaction to the Lemoine story suggests is that the general public finds the idea of AI as a decision-making actor (perhaps sentient, perhaps not) exceedingly absurd and ridiculous. The article has largely been framed not as an example of how close we are to AGI, but as an example of how strange Silicon Valley (or at least Lemoine) is.

The same problem arises, I’ve realized, when I try to explain concern about AGI to unconvinced friends. If you say things like “the AI will decide to bribe people so it can survive,” it turns them off. AIs don’t decide things, they respond. They do what humans tell them to do. Why are you anthropomorphizing this thing?

What wins people over is talking about the consequences systems have. So instead of saying, “the AI will start accumulating resources to stay alive,” I’ll say something like, “AIs have decisively replaced humans when it comes to recommending music and movies. They have replaced humans in making bail decisions, and they will take on more and more tasks, and Google and Facebook and the people who run them are not remotely prepared to analyze the subtle mistakes they will make, the subtle ways in which they will diverge from human wishes. Those mistakes will grow and grow until one day they could kill us all.”

That’s how my colleague Kelsey Piper makes the case for concern about AI, and it’s a good argument. It’s a better argument, for lay people, than talking about servers amassing billions in wealth and using it to bribe an army of humans.

And it’s an argument that I think can help bridge the extremely unfortunate divide that has emerged between the AI bias community and the AI existential risk community. At root, I think these communities are trying to do the same thing: build AI that reflects genuine human needs, not a bad approximation of human needs built for short-term corporate profit. And research in one area can help research in the other; the work of AI safety researcher Paul Christiano, for example, has major implications for how to assess bias in machine learning systems.

But too often, the communities are at each other’s throats, partly because of the perception that they are fighting over scarce resources.

It’s a huge missed opportunity. And it’s a problem I think people on the AI risk side (including some readers of this newsletter) have a chance to correct by drawing these connections and making it clear that alignment is a near-term as well as a long-term problem. Some people are making this case brilliantly. But I want more.

A version of this story was originally published in the Future Perfect newsletter. Sign up here to subscribe!




