
Big Tech was moving cautiously on AI.
Then came ChatGPT.

Google, Facebook and Microsoft helped build the scaffolding of AI. Smaller companies are taking it to the masses, forcing Big Tech to react.

[What is ChatGPT?]

Three months before ChatGPT debuted in November, Facebook’s parent company Meta released a similar chatbot. But unlike the phenomenon that ChatGPT instantly became, with more than a million users in its first five days, Meta’s Blenderbot was boring, said Meta’s chief artificial intelligence scientist, Yann LeCun.
“The reason it was boring was because it was made safe,” LeCun said last week at a forum hosted by AI consulting company Collective[i]. He blamed the tepid public response on Meta being “overly careful about content moderation,” like directing the chatbot to change the subject if a user asked about religion. ChatGPT, on the other hand, will converse about the concept of falsehoods in the Quran, write a prayer for a rabbi to deliver to Congress and compare God to a flyswatter.
ChatGPT is quickly going mainstream now that Microsoft — which recently invested billions of dollars in the company behind the chatbot, OpenAI — is working to incorporate it into its popular office software and selling access to the tool to other businesses. The surge of attention around ChatGPT is prompting pressure inside tech giants including Meta and Google to move faster, potentially sweeping safety concerns aside, according to interviews with six current and former Google and Meta employees, some of whom spoke on the condition of anonymity because they were not authorized to speak.
At Meta, employees have recently shared internal memos urging the company to speed up its AI approval process to take advantage of the latest technology, according to one of them. Google, which helped pioneer some of the technology underpinning ChatGPT, recently issued a “code red” around launching AI products and proposed a “green lane” to shorten the process of assessing and mitigating potential harms, according to a report in the New York Times.
ChatGPT, along with text-to-image tools such as DALL-E 2 and Stable Diffusion, is part of a new wave of software called generative AI. They create works of their own by drawing on patterns they’ve identified in vast troves of existing, human-created content. This technology was pioneered at big tech companies like Google that in recent years have grown more secretive, announcing new models or offering demos but keeping the full product under lock and key. Meanwhile, research labs like OpenAI rapidly launched their latest versions, raising questions about how corporate offerings, like Google’s language model LaMDA, stack up.
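To make the "generative" idea concrete, here is a deliberately tiny sketch. It is nothing like the billion-parameter systems described in this article; it only illustrates, in miniature, the same loop those models perform at vastly greater scale: learn statistical patterns from existing human-written text, then sample new text from those patterns.

```python
# Toy illustration of the "generative" idea: learn patterns from existing
# human-written text, then sample new text from those patterns. A real model
# learns from vast troves of content with billions of parameters; this Markov
# chain learns from one sentence, purely to show the learn-then-generate loop.
import random
from collections import defaultdict

corpus = (
    "big tech was moving cautiously on ai then came chatgpt and the race to "
    "ship generative ai products began in earnest across the industry"
).split()

# Learn which word tends to follow which (the "patterns" in existing content).
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generate a new sequence by repeatedly sampling a plausible next word.
random.seed(0)
word = "big"
output = [word]
for _ in range(12):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))
```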
Tech giants have been skittish since public debacles like Microsoft’s Tay, which it took down in less than a day in 2016 after trolls prompted the bot to call for a race war, suggest Hitler was right and tweet “Jews did 9/11.” Meta defended Blenderbot and left it up after it made racist comments in August, but pulled down another AI tool, called Galactica, in November after just three days amid criticism over its inaccurate and sometimes biased summaries of scientific research.
“People feel like OpenAI is newer, fresher, more exciting and has fewer sins to pay for than these incumbent companies, and they can get away with this for now,” said a Google employee who works in AI, referring to the public’s willingness to accept ChatGPT with less scrutiny. Some top talent has jumped ship to nimbler start-ups, such as OpenAI and Stability AI, the company behind the image generator Stable Diffusion.
Some AI ethicists fear that Big Tech’s rush to market could expose billions of people to potential harms — such as sharing inaccurate information, generating fake photos or giving students the ability to cheat on school tests — before trust and safety experts have been able to study the risks. Others in the field share OpenAI’s philosophy that releasing the tools to the public, often nominally in a “beta” phase after mitigating some predictable risks, is the only way to assess real world harms.
“The pace of progress in AI is incredibly fast, and we are always keeping an eye on making sure we have efficient review processes, but the priority is to make the right decisions, and release AI models and products that best serve our community,” said Joelle Pineau, managing director of Fundamental AI Research at Meta.
“We believe that AI is foundational and transformative technology that is incredibly useful for individuals, businesses and communities,” said Lily Lin, a Google spokesperson. “We need to consider the broader societal impacts these innovations can have. We continue to test our AI technology internally to make sure it’s helpful and safe.”
Microsoft’s chief of communications, Frank Shaw, said his company works with OpenAI to build in extra safety mitigations when it uses AI tools like DALL-E 2 in its products. “Microsoft has been working for years to both advance the field of AI and publicly guide how these technologies are created and used on our platforms in responsible and ethical ways,” Shaw said.
OpenAI declined to comment.
The technology underlying ChatGPT isn’t necessarily better than what Google and Meta have developed, said Mark Riedl, professor of computing at Georgia Tech and an expert on machine learning. But OpenAI’s practice of releasing its language models for public use has given it a real advantage.
“For the last two years they’ve been using a crowd of humans to provide feedback to GPT,” said Riedl, such as giving a “thumbs down” for an inappropriate or unsatisfactory answer, a process called “reinforcement learning from human feedback.”
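At a high level, that feedback loop has two stages: collect human preference labels on model outputs, then use them to train a reward signal that steers the model. The sketch below is a simplified stand-in, not OpenAI's actual pipeline; it only shows the first part, fitting a tiny "reward model" to hypothetical thumbs-up/down labels, and omits the reinforcement-learning fine-tuning step entirely.

```python
# Simplified stand-in for the human-feedback stage of RLHF (not OpenAI's
# pipeline): fit a tiny logistic-regression "reward model" to thumbs-up/down
# labels, then use it to score candidate responses. The subsequent step, where
# the language model is fine-tuned against this reward, is omitted.
import numpy as np

rng = np.random.default_rng(0)

# Pretend each response has already been turned into a feature vector.
# feedback = 1.0 means a user gave a thumbs up, 0.0 means thumbs down.
embeddings = rng.normal(size=(500, 16))      # hypothetical response embeddings
hidden_preference = rng.normal(size=16)      # stands in for real human taste
feedback = (embeddings @ hidden_preference
            + rng.normal(scale=0.5, size=500) > 0).astype(float)

# Fit the reward model with plain gradient descent on the human labels.
w = np.zeros(16)
for _ in range(2000):
    scores = 1 / (1 + np.exp(-(embeddings @ w)))
    grad = embeddings.T @ (scores - feedback) / len(feedback)
    w -= 0.5 * grad

def reward(candidate_embedding: np.ndarray) -> float:
    """Higher means the model predicts users would prefer this response."""
    return float(candidate_embedding @ w)

# Rank a few candidate responses by predicted human preference.
candidates = rng.normal(size=(3, 16))
best = max(range(3), key=lambda i: reward(candidates[i]))
print("response preferred by the reward model:", best)
```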
Silicon Valley’s sudden willingness to consider taking more reputational risk arrives as tech stocks are tumbling. When Google laid off 12,000 employees last week, CEO Sundar Pichai wrote that the company had undertaken a rigorous review to focus on its highest priorities, twice referencing its early investments in AI.
A decade ago, Google was the undisputed leader in the field. It acquired the cutting-edge AI lab DeepMind in 2014 and open-sourced its machine learning software TensorFlow in 2015. By 2016, Pichai pledged to transform Google into an “AI first” company.
The next year, Google released transformers — a pivotal piece of software architecture that made the current wave of generative AI possible.
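At the heart of the transformer is the attention operation, in which every token in a sequence weighs every other token when computing its representation. The numpy sketch below is a bare-bones rendering of single-head scaled dot-product self-attention, with random matrices standing in for the learned weights of a real model; it illustrates the mechanism, not Google's implementation.

```python
# Bare-bones scaled dot-product self-attention (single head), the operation at
# the heart of the transformer architecture. Random matrices stand in for the
# learned embeddings and projection weights of a real model.
import numpy as np

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8                      # 5 tokens, 8-dim embeddings (toy sizes)

x = rng.normal(size=(seq_len, d_model))      # token embeddings
w_q = rng.normal(size=(d_model, d_model))    # learned projections in a real model
w_k = rng.normal(size=(d_model, d_model))
w_v = rng.normal(size=(d_model, d_model))

q, k, v = x @ w_q, x @ w_k, x @ w_v

# Each token scores every other token, scaled to keep the values well-behaved.
scores = q @ k.T / np.sqrt(d_model)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence

# Each token's new representation is a weighted mix of all tokens' values.
attended = weights @ v
print(attended.shape)  # (5, 8)
```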
The company kept rolling out state-of-the-art technology that propelled the entire field forward, deploying some AI breakthroughs in understanding language to improve Google search. Inside big tech companies, the system of checks and balances for vetting the ethical implications of cutting-edge AI isn’t as established as it is for privacy or data security. Typically, teams of AI researchers and engineers publish papers on their findings, incorporate their technology into the company’s existing infrastructure or develop new products, a process that can put them at odds with teams working on responsible AI, given the pressure to see innovation reach the public sooner.
Google released its AI principles in 2018, after facing employee protest over Project Maven, a contract to provide computer vision for Pentagon drones, and consumer backlash over a demo for Duplex, an AI system that would call restaurants and make a reservation without disclosing it was a bot. In August last year, Google began giving consumers access to a limited version of LaMDA through its app AI Test Kitchen. It has not yet released it fully to the general public, despite Google’s plans to do so at the end of 2022, according to former Google software engineer Blake Lemoine, who told The Washington Post that he had come to believe LaMDA was sentient.
[The Google engineer who thinks the company’s AI has come to life]

But the top AI talent behind these developments grew restless.

In the past year or so, top AI researchers from Google have left to launch start-ups around large language models, including Character.AI, Cohere, Adept, Inflection.AI and Inworld AI, in addition to search start-ups using similar models to develop a chat interface, such as Neeva, run by former Google executive Sridhar Ramaswamy.
Character.AI founder Noam Shazeer, who helped invent the transformer and other core machine learning architecture, said the flywheel effect of user data has been invaluable. The first time he applied user feedback to Character.AI, which allows anyone to generate chatbots based on short descriptions of real people or imaginary figures, engagement rose by more than 30 percent.
Bigger companies like Google and Microsoft are generally focused on using AI to improve their massive existing business models, said Nick Frosst, who worked at Google Brain for three years before co-founding Cohere, a Toronto-based start-up building large language models that can be customized to help businesses. One of his co-founders, Aidan Gomez, also helped invent transformers when he worked at Google.
“The space moves so quickly, it’s not surprising to me that the people leading are smaller companies,” said Frosst.
AI has been through several hype cycles over the past decade, but the furor over DALL-E and ChatGPT has reached new heights.
Soon after OpenAI released ChatGPT, tech influencers on Twitter began to predict that generative AI would spell the demise of Google search. ChatGPT delivered simple answers in an accessible way and didn’t ask users to rifle through blue links. Besides, after a quarter of a century, Google’s search interface had grown bloated with ads and marketers trying to game the system.
“Thanks to their monopoly position, the folks over at Mountain View have [let] their once-incredible search experience degenerate into a spam-ridden, SEO-fueled hellscape,” technologist Can Duruk wrote in his newsletter Margins, referring to Google’s hometown.
On the anonymous app Blind, tech workers posted dozens of questions about whether the Silicon Valley giant could compete.
“If Google doesn’t get their act together and start shipping, they will go down in history as the company who nurtured and trained an entire generation of machine learning researchers and engineers who went on to deploy the technology at other companies,” tweeted David Ha, a renowned research scientist who recently left Google Brain for Stability AI, the open-source start-up behind the text-to-image model Stable Diffusion.
AI engineers still inside Google shared his frustration, employees say. For years, employees had sent memos about incorporating chat functions into search, viewing it as an obvious evolution. But they also understood that Google had justifiable reasons not to be hasty about switching up its search product, beyond the fact that responding to a query with one answer eliminates valuable real estate for online ads. A chatbot that pointed to one answer directly from Google could increase its liability if the response was found to be harmful or plagiarized.
Chatbots like OpenAI’s routinely make factual errors and often switch their answers depending on how a question is asked. Moving from providing a range of answers to queries that link directly to their source material, to using a chatbot to give a single, authoritative answer, would be a big shift that makes many inside Google nervous, said one former Google AI researcher. The company doesn’t want to take on the role or responsibility of providing single answers like that, the person said. Previous updates to search, such as adding Instant Answers, were done slowly and with great caution.
Inside Google, however, some of the frustration with the AI safety process came from the sense that cutting-edge technology was never released as a product because of fears of bad publicity — if, say, an AI model showed bias.

[Interactive: How AI is changing culture. How we make art: AI image generators are trained to “understand” the content of hundreds of millions of images, usually scraped from the internet (possibly including yours), in order to create new images out of thin air.]
Meta employees have also had to deal with the company’s concerns about bad PR, according to a person familiar with the company’s internal deliberations who spoke on the condition of anonymity to discuss internal conversations. Before launching new products or publishing research, Meta employees have to answer questions about the potential risks of publicizing their work, including how it could be misinterpreted, the person said. Some projects are reviewed by public relations staff, as well as internal compliance experts who ensure the company’s products comply with its 2011 Federal Trade Commission agreement on how it handles user data.
To Timnit Gebru, executive director of the nonprofit Distributed AI Research Institute, the prospect of Google sidelining its responsible AI team doesn’t necessarily signal a shift in power or safety concerns, because those warning of the potential harms were never empowered to begin with. “If we were lucky, we’d get invited to a meeting,” said Gebru, who helped lead Google’s Ethical AI team until she was fired for a paper criticizing large language models.
From Gebru’s perspective, Google was slow to release its AI tools because the company lacked a strong enough business incentive to risk a hit to its reputation.
Rumman Chowdhury, who led Twitter’s machine-learning ethics team until Elon Musk disbanded it in November, said she expects companies like Google to increasingly sideline internal critics and ethicists as they scramble to catch up with OpenAI.
“We thought it was going to be China pushing the U.S., but looks like it’s start-ups,” she said.

Amazon Warns Employees to Beware of ChatGPT

At the same time, OpenAI’s ChatGPT gave correct answers to interview questions for a software coding position.

ChatGPT has been making the tech industry sweat since its rise in popularity last year, and now Amazon is feeling the heat too. According to internal company communications viewed by Insider, an Amazon lawyer has urged employees not to share code with the AI chatbot.

Insider reported earlier this week that the lawyer specifically requested that employees not share “any Amazon confidential information (including Amazon code you are working on)” with ChatGPT, according to screenshots of Slack messages reviewed by the outlet. The guidance comes after the company reportedly witnessed ChatGPT responses that mimicked internal Amazon data.

“This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn’t want its output to include or resemble our confidential information (and I’ve already seen instances where its output closely matches existing material),” the lawyer wrote further, according to Insider.

Amazon may be right to worry about ChatGPT absorbing its data: in a related report, the chatbot reportedly gave correct answers to interview questions for a software coding position at the company. According to Slack channel transcripts also reviewed by Insider, the AI provided correct solutions to software coding questions and even made improvements to some of Amazon’s code.

“I was honestly impressed!” an employee reportedly wrote on Slack. “I’m both scared and excited to see what impact this will have on the way that we conduct coding interviews.”

The novelty of ChatGPT has not worn off just yet, but questions about how it may intersect with our daily lives have sprung up in recent weeks. While the chatbot was able to pass a final exam in an MBA-level course at Wharton (where it did struggle with some basic arithmetic), ChatGPT’s role in education, among other fields, remains unsettled. Some school systems, like the New York City Department of Education, have decided to ban the tech over fears of cheating, but OpenAI’s CEO simply believes school administrators need to get over it.
