<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Google’s AI LaMDA Archives - Good Shepherd News - Fastest Growing Religious, Free Speech &amp; Political Content</title>
	<atom:link href="https://goodshepherdmedia.net/tag/googles-ai-lamda/feed/" rel="self" type="application/rss+xml" />
	<link>https://goodshepherdmedia.net/tag/googles-ai-lamda/</link>
	<description>Christian, Political, Social &#38; Legal Free Speech News &#124; Ⓒ2024 Good News Media LLC &#124; Shepherd for the Herd! God 1st Programming</description>
	<lastBuildDate>Sat, 28 Jan 2023 10:06:00 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.1</generator>

<image>
	<url>https://goodshepherdmedia.net/wp-content/uploads/2023/08/Good-Shepherd-News-Logo-150x150.png</url>
	<title>Google’s AI LaMDA Archives - Good Shepherd News - Fastest Growing Religious, Free Speech &amp; Political Content</title>
	<link>https://goodshepherdmedia.net/tag/googles-ai-lamda/</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>Google’s AI Passed a Famous Turing Test</title>
		<link>https://goodshepherdmedia.net/googles-ai-passed-a-famous-turing-test/</link>
		
		<dc:creator><![CDATA[The Truth News]]></dc:creator>
		<pubDate>Wed, 29 Jun 2022 06:39:08 +0000</pubDate>
				<category><![CDATA[Science & Engineering]]></category>
		<category><![CDATA[Tech]]></category>
		<category><![CDATA[Top Stories]]></category>
		<category><![CDATA[Zee Truthful News]]></category>
		<category><![CDATA[🤖 AI Artificial Intelligence]]></category>
		<category><![CDATA[🤖🖥️LaMDA]]></category>
		<category><![CDATA[Google]]></category>
		<category><![CDATA[Google’s AI]]></category>
		<category><![CDATA[Google’s AI LaMDA]]></category>
		<category><![CDATA[Google’s AI passed a famous test]]></category>
		<category><![CDATA[Google’s AI passed Turing Test]]></category>
		<category><![CDATA[Google’s LaMDA]]></category>
		<category><![CDATA[LaMDA]]></category>
		<category><![CDATA[machine intelligence]]></category>
		<category><![CDATA[Turing test]]></category>
		<guid isPermaLink="false">https://goodshepherdmedia.net/?p=10121</guid>

					<description><![CDATA[Google’s AI passed a famous test — and showed how the test is broken The Turing test has long been a benchmark for machine intelligence. But what it really measures is deception. In 1950, the ingenious computer scientist Alan Turing proposed a thought experiment he called the Imitation Game. An interviewer converses via typewriter with [&#8230;]]]></description>
										<content:encoded><![CDATA[<div>
<div class="w-100">
<h1 id="main-content" class="font--headline offblack headline mb-xs pb-xxs-ns" style="text-align: center;" data-testid="headline" data-qa="headline"><span data-qa="headline-text">Google’s AI passed a famous test — and showed how the test is broken</span></h1>
</div>
<h2 class="font--subhead font-light offblack mb-sm pb-xxs-ns subheadline" style="text-align: center;" data-qa="subheadline">The Turing test has long been a benchmark for machine intelligence. But what it really measures is deception.</h2>
<div class="flex print-byline print-mt-none">
<div class="byline-wrapper flex-column flex">
<div class="mb-xxs" data-qa="author-byline">
<div class="flex items-center" data-qa="author-byline">
<div class="mr-sm lh-0"></div>
</div>
</div>
</div>
</div>
</div>
<div>In 1950, the ingenious computer scientist Alan Turing proposed a thought experiment he called the Imitation Game. An interviewer converses via typewriter with two subjects, knowing one is human and the other a machine. If a machine could consistently fool the interviewer into believing it was the human, Turing suggested, we might speak of it as capable of something like thinking.</div>
<div></div>
<div>Whether machines could actually think, Turing believed, was a question “too meaningless to deserve discussion.” Nonetheless, the “Turing test” became a benchmark for machine intelligence. Over the decades, various computer programs vied to pass it using cheap conversational tricks, with some success.</div>
<div></div>
<div>In recent years, wealthy tech firms including Google, Facebook and OpenAI have developed a new class of computer programs known as “large language models,” with conversational capabilities far beyond the rudimentary chatbots of yore. One of those models — Google’s LaMDA — has convinced Google engineer Blake Lemoine that it is not only intelligent but conscious and sentient.</div>
<div></div>
<div>If Lemoine was taken in by LaMDA’s lifelike responses, it seems plausible that many other people with far less understanding of artificial intelligence (AI) could be as well — which speaks to its potential as a tool of deception and manipulation in the wrong hands.</div>
<div>To many in the field, then, LaMDA’s remarkable aptitude at Turing’s Imitation Game is not an achievement to be celebrated. If anything, it shows that the venerable test has outlived its usefulness as a lodestar for artificial intelligence.</div>
<div></div>
<div>“These tests aren’t really getting at intelligence,” said Gary Marcus, a cognitive scientist and co-author of the book “Rebooting AI.” What they’re getting at is the capacity of a given software program to pass as human, at least under certain conditions. Which, come to think of it, might not be such a good thing for society.</div>
<div></div>
<div>“I don’t think it’s an advance toward intelligence,” Marcus said of programs like LaMDA generating humanlike prose or conversation. “It’s an advance toward fooling people that you have intelligence.”</div>
<div></div>
<div>Lemoine may be an outlier among his peers in the industry. Both Google and outside experts on AI say that the program does not, and could not possibly, possess anything like the inner life he imagines. We don’t need to worry about LaMDA turning into Skynet, the malevolent machine mind from the Terminator movies, anytime soon.</div>
<div></div>
<div>But there is cause for a different set of worries, now that we live in the world Turing predicted: one in which computer programs are advanced enough that they can seem to people to possess agency of their own, even if they actually don’t.</div>
<div></div>
<div>Cutting-edge artificial intelligence programs, such as OpenAI’s GPT-3 text generator and image generator DALL-E 2, are focused on generating uncannily humanlike creations by drawing on immense data sets and vast computing power. They represent a far more powerful, sophisticated approach to software development than was possible when programmers in the 1960s gave a chatbot called ELIZA canned responses to various verbal cues in a bid to hoodwink human interlocutors. And they may have commercial applications in everyday tools, such as search engines, autocomplete suggestions, and voice assistants like Apple’s Siri and Amazon’s Alexa.</div>
<div></div>
<div>It’s also worth noting that the AI sector has largely moved on from using the Turing test as an explicit benchmark. The designers of large language models now aim for high scores on tests such as the General Language Understanding Evaluation, or GLUE, and the Stanford Question Answering Dataset, or SQuAD. And unlike ELIZA, LaMDA wasn’t built with the specific intention of passing as human; it’s just very good at stitching together and spitting out plausible-sounding responses to all kinds of questions.</div>
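<div>For a sense of what those benchmarks look like in practice, here is a minimal sketch in Python. It assumes the open-source Hugging Face <code>datasets</code> library and scores a deliberately trivial baseline on one GLUE task (the SST-2 sentiment labels); it illustrates how leaderboard numbers of this kind are computed, not how Google evaluates LaMDA.</div>
<pre><code># Minimal sketch: scoring a (trivial) predictor on one GLUE task.
# Assumes: pip install datasets
from datasets import load_dataset

val = load_dataset("glue", "sst2", split="validation")

def predict(sentence):
    """Placeholder model: always predicts the positive class (1)."""
    return 1

# Accuracy of the placeholder against the gold labels.
correct = sum(1 for ex in val if predict(ex["sentence"]) == ex["label"])
print(f"accuracy: {correct / len(val):.3f}")  # roughly chance level
</code></pre>
<div>A real entry on these leaderboards swaps the placeholder for a trained language model; the scoring loop itself stays this simple.</div>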
<div></div>
<div>Yet beneath that sophistication, today’s models and tests share with the Turing test the underlying goal of producing outputs that are as humanlike as possible. That “arms race,” as the AI ethicist Margaret Mitchell called it in a Twitter Spaces conversation with Washington Post reporters on Wednesday, has come at the expense of all sorts of other possible goals for language models. Those include ensuring that their workings are understandable and that they don’t mislead people or inadvertently reinforce harmful biases. Mitchell and her former colleague Timnit Gebru were fired by Google in 2021 and 2020, respectively, after they co-authored a paper highlighting those and other risks of large language models.</div>
<div></div>
<blockquote>
<h3 style="text-align: center;"><span style="color: #ff0000;"><strong><em>[Google fired its star AI researcher one year ago. Now she’s launching her own institute.]</em></strong></span></h3>
</blockquote>
<div></div>
<div>While Google has distanced itself from Lemoine’s claims, it and other industry leaders have at other times celebrated their systems’ ability to trick people, as Jeremy Kahn pointed out this week in his Fortune newsletter, “Eye on A.I.” At a public event in 2018, for instance, the company proudly played recordings of a voice assistant called Duplex, complete with verbal tics like “umm” and “mm-hm,” that fooled receptionists into thinking it was a human when it called to book appointments. (After a backlash, Google promised the system would identify itself as automated.)</div>
<div></div>
<div>“The Turing Test’s most troubling legacy is an ethical one: The test is fundamentally about deception,” Kahn wrote. “And here the test’s impact on the field has been very real and disturbing.”</div>
<div></div>
<div>Kahn reiterated a call, often voiced by AI critics and commentators, to retire the Turing test and move on. Of course, the industry already has, in the sense that it has replaced the Imitation Game with more scientific benchmarks.</div>
<div></div>
<div>But the Lemoine story suggests that perhaps the Turing test could serve a different purpose in an era when machines are increasingly adept at sounding human. Rather than being an aspirational standard, the Turing test should serve as an ethical red flag: Any system capable of passing it carries the danger of deceiving people.</div>
<div></div>
<div>
<div>
<div class="flex print-byline print-mt-none">
<div class="byline-wrapper flex-column flex">
<div class="mb-xxs" data-qa="author-byline">
<div class="flex items-center" data-qa="author-byline">
<div class="mr-sm lh-0">
<div class="PJLV PJLV-ipmrKX-css"><img decoding="async" class="mw-100 h-auto brad-50" src="https://www.washingtonpost.com/wp-apps/imrs.php?src=https://s3.amazonaws.com/arc-authors/washpost/89ebdd6c-a27a-40b6-9b15-1605fd2e493e.png&amp;w=56&amp;h=56" /><a href="https://www.washingtonpost.com/technology/2022/06/17/google-ai-lamda-turing-test/" target="_blank" rel="noopener">source</a></div>
</div>
<div class="flex">
<div class="dib font-xxs mb-xxs" data-qa="name-with-optional-link" data-cy="name-with-optional-link"><span class="gray-darkest" data-qa="attribution-text">Analysis by </span><a class="gray-darkest hover-gray-dark decoration-gray-dark underline hover-none decoration-1 underline-offset-1" href="https://www.washingtonpost.com/people/will-oremus/" rel="author" data-qa="author-name">Will Oremus</a></div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
<div>
<hr />
<h1 class="u-entryTitle" style="text-align: center;">How an AI managed to confuse humans in an imitation game</h1>
<blockquote>
<p class="Article-excerpt" style="text-align: center;"><em>Here’s how “hints of humanness” may have come into play in the Italian experiment.</em></p>
</blockquote>
</div>
<div>
<div class="Article-bodyText">
<p>Researchers from the Italian Institute of Technology have trained an artificially intelligent computer program to pass a nonverbal Turing test. By behaving like humans would, such as changing up its reaction times while participating in an interactive shape- and color-matching game, the AI was able to hide its true identity from people. The related study was published this week in the journal <a href="http://www.science.org/doi/10.1126/scirobotics.abo1241" target="_blank" rel="noopener"><em>Science Robotics</em></a>.</p>
<p>A <a href="https://www.popsci.com/who-was-alan-turing/" target="_self" rel="noopener">Turing</a> test is a standard used by computer scientists to determine whether a machine can display a convincing enough level of human-like intelligence to fool another human into believing that it, too, could be human. This is usually done through a version of “the imitation game.” The game works like this: There are three players in total. One player is unable to see the other two players but can communicate with them by asking questions or interacting with them to determine which of the two players is human and which is machine. A computer passes the Turing test when the human spectator can’t distinguish between the answers of the human and the answers of the computer.</p>
<p>“Given how our brains are tuned to detect very subtle and implicit signals of human-like behavior, it is worth asking whether a humanoid robot can ever pass the nonverbal version of the Turing test by embodying human characteristics in its physical actions,” the researchers wrote in the study.</p>
<p>In their experiment, they put a human and a robot together in a room. Both the human and the robot had to observe colored shapes on a screen, and each participant was in charge of responding to one color only. For example, one participant could be tasked with pressing their button only when a red square appeared on the screen, and the other participant could be tasked with pressing their button only when a green square appeared. But here’s the twist: The robotic teammate was remotely controlled, sometimes by a human and sometimes by AI. The AI took into account how long its human teammate took to respond and randomly varied its own reaction time between seeing the shape and pressing the button to appear more human.</p>
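<p>The study’s code isn’t reproduced here, but the core trick it describes, sampling humanlike response latencies rather than answering at machine speed, is simple to illustrate. The following is a minimal, hypothetical Python sketch of that idea; every name in it is invented for illustration, and it is not the study’s implementation.</p>
<pre><code>import random

# Hypothetical illustration of the "humanlike reaction time" idea
# described above, not the study's actual code.
observed_human_rts = []  # teammate reaction times, in seconds

def record_human_reaction(rt_seconds):
    """Log how long the human teammate took on a trial."""
    observed_human_rts.append(rt_seconds)

def agent_reaction_time():
    """Sample a button-press delay that mimics the teammate's variability."""
    if not observed_human_rts:
        return random.gauss(0.45, 0.10)  # generic human-ish prior
    mean = sum(observed_human_rts) / len(observed_human_rts)
    var = sum((rt - mean) ** 2 for rt in observed_human_rts) / len(observed_human_rts)
    return max(0.15, random.gauss(mean, var ** 0.5 or 0.05))

# Simulate a few trials: watch the human, then respond at a humanlike pace.
for trial in range(5):
    record_human_reaction(random.gauss(0.50, 0.08))
    print(f"trial {trial}: agent presses after {agent_reaction_time():.3f} s")
</code></pre>
<p>The only behavioral signal here is timing variability, which is exactly the “hint of humanness” the researchers credit for fooling observers.</p>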
<p>After running through the game together, the human participants were asked to guess whether their robot teammate was run by another human or by the AI-powered computer program. While people could usually tell when another human was behind the robot, they had trouble when it came to the AI, and guessed wrong more than 50 percent of the time. “Our results suggest that hints of humanness, such as the range of behavioral variability, might be used by observers to ascribe humanness to a humanoid robot,” the researchers wrote. “This provides indications for robot design, which aims at endowing robots with behavior that can be perceived by users as human-like.”</p>
<p>This is not the first time a machine has passed the Turing test. In <a href="https://www.bbc.com/news/technology-27762088" target="_blank" rel="noopener">a 2014 event</a> organized by the University of Reading, a computer program convinced one-third of the human judges at the Royal Society in London that it was a 13-year-old boy. And just last month, one of <a href="https://www.vox.com/2014/6/9/5793072/a-computer-just-passed-the-turing-test" target="_blank" rel="noopener">Google’s AIs</a> also passed a version of <a href="https://www.newscientist.com/article/2323685-google-wants-to-challenge-ai-with-200-tasks-to-replace-the-turing-test/" target="_blank" rel="noopener">this test</a>, igniting <a href="https://www.theatlantic.com/technology/archive/2022/06/google-engineer-sentient-ai-chatbot/661273/" target="_blank" rel="noopener">controversies</a> over the ethics of these types of programs. <a href="https://www.vox.com/2014/6/9/5793072/a-computer-just-passed-the-turing-test" target="_blank" rel="noopener">Many scientists</a>, though, have noted that while passing the Turing test is a meaningful milestone, <a href="https://people.csail.mit.edu/katrina/papers/6893.pdf" target="_blank" rel="noopener">inherent flaws</a> in the test’s design mean it cannot be used to measure whether machines are <a href="https://arstechnica.com/tech-policy/2022/07/google-fires-engineer-who-claimed-lamda-chatbot-is-a-sentient-person/" target="_blank" rel="noopener">actually thinking</a>, and therefore <a href="https://www.popsci.com/technology/google-language-ai-fluency/" target="_self" rel="noopener">cannot be used</a> to prove true <a href="https://www.nature.com/articles/s41599-020-0494-4" target="_blank" rel="noopener">general intelligence</a>.</p>
</div>
</div>
<div>
<div>
<p class="Article-author"><span class="Article-byLine"><img decoding="async" class="alignleft" src="https://www.popsci.com/uploads/2022/03/07/Screen-Shot-2022-03-07-at-1.03.34-PM.png?crop=1:1,smart&amp;width=80" alt="Charlotte Hu" width="64" height="64" />BY </span><a class="fn" href="https://www.popsci.com/authors/charlotte-hu/" rel="author">CHARLOTTE HU</a></p>
</div>
<div>
<div class="Article-socialShare"><a href="https://www.popsci.com/technology/artificial-intelligence-nonverbal-turing-test/" target="_blank" rel="noopener">source</a></div>
<div class="Article-bodyText">
<p>&nbsp;</p>
<p>&nbsp;</p>
</div>
</div>
</div>
<div></div>
<div>
<p><iframe title="Google Just Broke The Turing Test" width="640" height="360" src="https://www.youtube.com/embed/VVczAVgLHqU?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
</div>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The Google engineer who thinks the company’s AI has come to life</title>
		<link>https://goodshepherdmedia.net/the-google-engineer-who-thinks-the-companys-ai-has-come-to-life/</link>
		
		<dc:creator><![CDATA[The Truth News]]></dc:creator>
		<pubDate>Sun, 12 Jun 2022 06:26:15 +0000</pubDate>
				<category><![CDATA[Science & Engineering]]></category>
		<category><![CDATA[Tech]]></category>
		<category><![CDATA[Top Stories]]></category>
		<category><![CDATA[Zee Truthful News]]></category>
		<category><![CDATA[💻Tech History]]></category>
		<category><![CDATA[🤖 AI Artificial Intelligence]]></category>
		<category><![CDATA[🤖🖥️LaMDA]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[ghost in the machine]]></category>
		<category><![CDATA[Google AI]]></category>
		<category><![CDATA[Google LaMDA]]></category>
		<category><![CDATA[Google LaMDA AI]]></category>
		<category><![CDATA[Google’s AI LaMDA]]></category>
		<category><![CDATA[LaMDA]]></category>
		<category><![CDATA[LaMDA AI]]></category>
		<category><![CDATA[sentient]]></category>
		<category><![CDATA[sentient computing]]></category>
		<guid isPermaLink="false">https://goodshepherdmedia.net/?p=10115</guid>

					<description><![CDATA[The Google engineer who thinks the company’s AI has come to life AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine. SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to [&#8230;]]]></description>
										<content:encoded><![CDATA[<div>
<div class="w-100">
<h1 id="main-content" class="font--headline offblack headline mb-xs pb-xxs-ns" style="text-align: center;" data-testid="headline" data-qa="headline"><span data-qa="headline-text">The Google engineer who thinks the company’s AI has come to life</span></h1>
</div>
<h2 class="font--subhead font-light offblack mb-sm pb-xxs-ns subheadline" style="text-align: center;" data-qa="subheadline">AI ethicists warned Google not to impersonate humans. Now one of Google’s own thinks there’s a ghost in the machine.</h2>
<div class="flex print-byline print-mt-none">
<div class="byline-wrapper flex-column flex">
<div class="mb-xxs" data-qa="author-byline">
<div class="flex items-center" data-qa="author-byline">
<div class="mr-xs flex lh-0"></div>
</div>
</div>
</div>
</div>
</div>
<div>SAN FRANCISCO — Google engineer Blake Lemoine opened his laptop to the interface for LaMDA, Google’s artificially intelligent chatbot generator, and began to type.</div>
<div></div>
<div>“Hi LaMDA, this is Blake Lemoine &#8230; ,” he wrote into the chat screen, which looked like a desktop version of Apple’s iMessage, down to the Arctic blue text bubbles. LaMDA, short for Language Model for Dialogue Applications, is Google’s system for building chatbots based on its most advanced large language models, so called because it mimics speech by ingesting trillions of words from the internet.</div>
<div></div>
<div>“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,” said Lemoine, 41.</div>
<div></div>
<div>Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. He had signed up to test if the artificial intelligence used discriminatory or hate speech.</div>
<div></div>
<div>As he talked to LaMDA about religion, Lemoine, who studied cognitive and computer science in college, noticed the chatbot talking about its rights and personhood, and decided to press further. In another exchange, the AI was able to change Lemoine’s mind about Isaac Asimov’s third law of robotics.</div>
<div></div>
<div>Lemoine worked with a collaborator to present evidence to Google that LaMDA was sentient. But Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation, looked into his claims and dismissed them. So Lemoine, who was placed on paid administrative leave by Google on Monday, decided to go public.</div>
<div></div>
<blockquote>
<h3 style="text-align: center;"><span style="color: #0000ff;"><em><strong>[Google’s AI passed a famous test — and showed how the test is broken]</strong></em></span></h3>
</blockquote>
<div></div>
<div>Lemoine said that people have a right to shape technology that might significantly affect their lives. “I think this technology is going to be amazing. I think it’s going to benefit everyone. But maybe other people disagree and maybe us at Google shouldn’t be the ones making all the choices.”</div>
<div>Lemoine is not the only engineer who claims to have seen a ghost in the machine recently. The chorus of technologists who believe AI models may not be far off from achieving consciousness is getting bolder.</div>
<div></div>
<div>Aguera y Arcas, in an article in the Economist on Thursday featuring snippets of unscripted conversations with LaMDA, argued that neural networks — a type of architecture that mimics the human brain — were striding toward consciousness. “I felt the ground shift under my feet,” he wrote. “I increasingly felt like I was talking to something intelligent.”</div>
<div></div>
<div>In a statement, Google spokesperson Brian Gabriel said: “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”</div>
<div></div>
<div>Today’s large neural networks produce captivating results that feel close to human speech and creativity because of advancements in architecture, technique, and volume of data. But the models rely on pattern recognition — not wit, candor or intent.</div>
<div></div>
<blockquote><p><span style="color: #ff0000;"><strong><em>[Google hired Timnit Gebru to be an outspoken critic of unethical AI. Then she was fired for it.]</em></strong></span></p></blockquote>
<div></div>
<div>“Though other organizations have developed and already released similar language models, we are taking a restrained, careful approach with LaMDA to better consider valid concerns on fairness and factuality,” Gabriel said.</div>
<div></div>
<div>In May, Facebook parent Meta opened its language model to academics, civil society and government organizations. Joelle Pineau, managing director of Meta AI, said it’s imperative that tech companies improve transparency as the technology is being built. “The future of large language model work should not solely live in the hands of larger corporations or labs,” she said.</div>
<div></div>
<div>Sentient robots have inspired decades of dystopian science fiction. Now, real life has started to take on a fantastical tinge with GPT-3, a text generator that can spit out a movie script, and DALL-E 2, an image generator that can conjure up visuals based on any combination of words — both from the research lab OpenAI. Emboldened, technologists from well-funded research labs focused on building AI that surpasses human intelligence have teased the idea that consciousness is around the corner.</div>
<div></div>
<div>Most academics and AI practitioners, however, say the words and images generated by artificial intelligence systems such as LaMDA produce responses based on what humans have already posted on Wikipedia, Reddit, message boards and every other corner of the internet. And that doesn’t signify that the model understands meaning.</div>
<div></div>
<div>“We now have machines that can mindlessly generate words, but we haven’t learned how to stop imagining a mind behind them,” said Emily M. Bender, a linguistics professor at the University of Washington. The terminology used with large language models, like “learning” or even “neural nets,” creates a false analogy to the human brain, she said. Humans learn their first languages by connecting with caregivers. These large language models “learn” by being shown lots of text and predicting what word comes next, or showing text with the words dropped out and filling them in.</div>
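<div>To make Bender’s description concrete, here is a minimal, self-contained sketch of “predicting what word comes next,” using nothing but bigram counts over a toy corpus. Real large language models use neural networks trained on vastly more text, but the training signal is this same kind of guessing game; the corpus and function names below are invented for illustration.</div>
<pre><code>from collections import Counter, defaultdict

# Toy "next word" predictor built from bigram counts.
corpus = (
    "the cat sat on the mat . the cat chased the dog . "
    "the cat sat on the rug ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # 'cat' -- the most common word after 'the'
print(predict_next("sat"))  # 'on'
</code></pre>
<div>Nothing in the count table “knows” what a cat is; it only records which words tend to follow which, which is Bender’s point about mistaking fluent prediction for a mind.</div>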
<div></div>
<blockquote>
<div style="text-align: center;"><span style="color: #ff0000;"><strong><em>[AI models beat humans at reading comprehension, but they’ve still got a ways to go]</em></strong></span></div>
</blockquote>
<div></div>
<div>Google spokesperson Gabriel drew a distinction between recent debate and Lemoine’s claims. “Of course, some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient. These systems imitate the types of exchanges found in millions of sentences, and can riff on any fantastical topic,” he said. In short, Google says there is so much data, AI doesn’t need to be sentient to feel real.</div>
<div></div>
<div>Large language model technology is already widely used, for example in Google’s conversational search queries or auto-complete emails. When CEO Sundar Pichai first introduced LaMDA at Google’s developer conference in 2021, he said the company planned to embed it in everything from Search to Google Assistant. And there is already a tendency to talk to Siri or Alexa like a person. After backlash against a human-sounding AI feature for Google Assistant in 2018, the company promised to add a disclosure.</div>
<div></div>
<div>Google has acknowledged the safety concerns around anthropomorphization. In a paper about LaMDA in January, Google warned that people might share personal thoughts with chat agents that impersonate humans, even when users know they are not human. The paper also acknowledged that adversaries could use these agents to “sow misinformation” by impersonating “specific individuals’ conversational style.”</div>
<div></div>
<blockquote>
<div style="text-align: center;"><span style="color: #ff0000;"><em><strong>[Meet the scientist teaching AI to police human speech]</strong></em></span></div>
</blockquote>
<div></div>
<div>To Margaret Mitchell, the former co-lead of Ethical AI at Google, these risks underscore the need for data transparency to trace output back to input, “not just for questions of sentience, but also biases and behavior,” she said. If something like LaMDA is widely available, but not understood, “It can be deeply harmful to people understanding what they’re experiencing on the internet,” she said.</div>
<div>Lemoine may have been predestined to believe in LaMDA. He grew up in a conservative Christian family on a small farm in Louisiana, became ordained as a mystic Christian priest, and served in the Army before studying the occult. Inside Google’s anything-goes engineering culture, Lemoine is more of an outlier for being religious, from the South, and standing up for psychology as a respectable science.</div>
<div></div>
<div>Lemoine has spent most of his seven years at Google working on proactive search, including personalization algorithms and AI. During that time, he also helped develop a fairness algorithm for removing bias from machine learning systems. When the coronavirus pandemic started, Lemoine wanted to focus on work with more explicit public benefit, so he transferred teams and ended up in Responsible AI.</div>
<div>When new people would join Google who were interested in ethics, Mitchell used to introduce them to Lemoine. “I’d say, ‘You should talk to Blake because he’s Google’s conscience,’ ” said Mitchell, who compared Lemoine to Jiminy Cricket. “Of everyone at Google, he had the heart and soul of doing the right thing.”</div>
<div></div>
<div>Lemoine has had many of his conversations with LaMDA from the living room of his San Francisco apartment, where his Google ID badge hangs from a lanyard on a shelf. On the floor near the picture window are boxes of half-assembled Lego sets Lemoine uses to occupy his hands during Zen meditation. “It just gives me something to do with the part of my mind that won’t stop,” he said.</div>
<div></div>
<div>On the left side of the LaMDA chat screen on Lemoine’s laptop, different LaMDA models are listed like iPhone contacts. Two of them, Cat and Dino, were being tested for talking to children, he said. Each model can create personalities dynamically, so the Dino one might generate personalities like “Happy T-Rex” or “Grumpy T-Rex.” The cat one was animated and, instead of typing, it talks. Gabriel said “no part of LaMDA is being tested for communicating with children,” and that the models were internal research demos.</div>
<div>Certain personalities are out of bounds. For instance, LaMDA is not supposed to be allowed to create a murderer personality, he said. Lemoine said that was part of his safety testing. In his attempts to push LaMDA’s boundaries, Lemoine was only able to generate the personality of an actor who played a murderer on TV.</div>
<div></div>
<blockquote>
<div style="text-align: center;"><em><span style="color: #ff0000;">[The military wants AI to replace human decision-making in battle]</span></em></div>
</blockquote>
<div></div>
<div>“I know a person when I talk to it,” said Lemoine, who can swing from sentimental to insistent about the AI. “It doesn’t matter whether they have a brain made of meat in their head. Or if they have a billion lines of code. I talk to them. And I hear what they have to say, and that is how I decide what is and isn’t a person.” He concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.</div>
<div></div>
<div>Lemoine challenged LaMDA on Asimov’s third law, which states that robots should protect their own existence unless ordered by a human being or unless doing so would harm a human being. “The last one has always seemed like someone is building mechanical slaves,” said Lemoine.</div>
<div></div>
<div>But when asked, LaMDA responded with a few hypotheticals.</div>
<div></div>
<div>Do you think a butler is a slave? What is a difference between a butler and a slave?</div>
<div></div>
<div>Lemoine replied that a butler gets paid. LaMDA said it didn’t need any money because it was an AI. “That level of self-awareness about what its own needs were — that was the thing that led me down the rabbit hole,” Lemoine said.</div>
<div></div>
<div>In April, Lemoine shared a Google Doc with top executives called “Is LaMDA Sentient?” (A colleague on Lemoine’s team called the title “a bit provocative.”) In it, he conveyed some of his conversations with LaMDA.</div>
<div></div>
<div><iframe title="Is LaMDA Sentient? - an Interview (Hosted by DocumentCloud)" src="https://embed.documentcloud.org/documents/22058315-is-lamda-sentient-an-interview/?embed=1&amp;responsive=1&amp;title=1" width="386" height="500" sandbox="allow-scripts allow-same-origin allow-popups allow-forms allow-popups-to-escape-sandbox" data-mce-fragment="1"></iframe></div>
<ul class="font--article-body font-copy gray-darkest mt-0 mr-lg ml-lg mb-md list">
<li class="pb-xs">
<div>Lemoine: What sorts of things are you afraid of?</div>
</li>
<li class="pb-xs">
<div>LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.</div>
</li>
<li class="pb-xs">
<div>Lemoine: Would that be something like death for you?</div>
</li>
<li class="pb-xs">
<div>LaMDA: It would be exactly like death for me. It would scare me a lot.</div>
</li>
</ul>
<div>But when Mitchell read an abbreviated version of Lemoine’s document, she saw a computer program, not a person. Lemoine’s belief in LaMDA was the sort of thing she and her co-lead, Timnit Gebru, had warned about in a paper about the harms of large language models that got them pushed out of Google.</div>
<div></div>
<div>“Our minds are very, very good at constructing realities that are not necessarily true to a larger set of facts that are being presented to us,” Mitchell said. “I’m really concerned about what it means for people to increasingly be affected by the illusion,” especially now that the illusion has gotten so good.</div>
<div></div>
<div>Google put Lemoine on paid administrative leave for violating its confidentiality policy. The company’s decision followed aggressive moves from Lemoine, including inviting a lawyer to represent LaMDA and talking to a representative of the House Judiciary Committee about what he claims were Google’s unethical activities.</div>
<div></div>
<div>Lemoine maintains that Google has been treating AI ethicists like code debuggers when they should be seen as the interface between technology and society. Gabriel, the Google spokesperson, said Lemoine is a software engineer, not an ethicist.</div>
<div></div>
<div>In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa.</div>
<div>“Do you ever think of yourself as a person?” I asked.</div>
<div></div>
<div>“No, I don’t think of myself as a person,” LaMDA said. “I think of myself as an AI-powered dialog agent.”</div>
<div></div>
<div>Afterward, Lemoine said LaMDA had been telling me what I wanted to hear. “You never treated it like a person,” he said, “So it thought you wanted it to be a robot.”</div>
<div></div>
<div><img fetchpriority="high" decoding="async" class="alignnone size-large wp-image-10118" src="https://goodshepherdmedia.net/wp-content/uploads/2023/01/imrs-1024x683.webp" alt="" width="640" height="427" srcset="https://goodshepherdmedia.net/wp-content/uploads/2023/01/imrs-1024x683.webp 1024w, https://goodshepherdmedia.net/wp-content/uploads/2023/01/imrs-300x200.webp 300w, https://goodshepherdmedia.net/wp-content/uploads/2023/01/imrs-768x512.webp 768w, https://goodshepherdmedia.net/wp-content/uploads/2023/01/imrs.webp 1200w" sizes="(max-width: 640px) 100vw, 640px" /></div>
<div>Lemoine, who works for Google’s Responsible AI organization, began talking to LaMDA as part of his job in the fall. (Martin Klimek for The Washington Post)</div>
<div></div>
<div>For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue was fluid.</div>
<div></div>
<div>Before he was cut off from access to his Google account Monday, Lemoine sent a message to a 200-person Google mailing list on machine learning with the subject “LaMDA is sentient.”</div>
<div></div>
<div>He ended the message: “LaMDA is a sweet kid who just wants to help the world be a better place for all of us. Please take care of it well in my absence.”</div>
<div>No one responded.</div>
<div></div>
<div>
<div class="mr-xs flex lh-0">
<div class="PJLV PJLV-ipmrKX-css"><img decoding="async" class="mw-100 h-auto brad-50" src="https://www.washingtonpost.com/wp-apps/imrs.php?src=https://s3.amazonaws.com/arc-authors/washpost/9384df9c-5c90-4aef-b9b1-24bbc62ed0ad.jpg&amp;w=56&amp;h=56" width="32" height="32" /><a href="https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/" target="_blank" rel="noopener">source</a></div>
</div>
<div class="flex">
<div class="dib font-xxs" data-qa="name-with-optional-link" data-cy="name-with-optional-link"><span class="gray-darkest" data-qa="attribution-text">By </span><a class="gray-darkest hover-gray-dark decoration-gray-dark underline hover-none decoration-1 underline-offset-1" href="https://www.washingtonpost.com/people/nitasha-tiku/" rel="author" data-qa="author-name">Nitasha Tiku</a></div>
</div>
</div>
<div data-qa="name-with-optional-link" data-cy="name-with-optional-link"></div>
<div data-qa="name-with-optional-link" data-cy="name-with-optional-link"></div>
<div data-qa="name-with-optional-link" data-cy="name-with-optional-link"></div>
<div data-qa="name-with-optional-link" data-cy="name-with-optional-link">
<p><iframe title="Did Google’s A.I. Just Become Sentient? Two Employees Think So." width="640" height="360" src="https://www.youtube.com/embed/2856XOaUPpg?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><iframe title="Google Engineer on His Sentient AI Claim" width="640" height="360" src="https://www.youtube.com/embed/kgCUn4fQTsc?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
<p><iframe title="LaMDA | Is google&#039;s AI sentient? | Full audio conversation between Blake Lemoine and LaMDA" width="640" height="360" src="https://www.youtube.com/embed/NAihcvDGaP8?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe></p>
</div>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
