
The Twitter Files Part 3 reveals what led to Trump’s removal from the social media platform

The Removal Of Donald Trump – Part 1 • Matt Taibbi • (@mtaibbi)

Matt Taibbi reported on the third installment of Elon Musk’s ‘Twitter Files’

The third installment of Elon Musk’s Twitter Files delved into what led to the removal of former President Trump’s account.

In “part one” of the third installment, which covers October 2020 through January 6, 2021, Substack writer Matt Taibbi told his followers, “We’ll show you what hasn’t been revealed: the erosion of standards within the company in months before J6, decisions by high-ranking executives to violate their own policies, and more, against the backdrop of ongoing, documented interaction with federal agencies.”

“Whatever your opinion on the decision to remove Trump that day, the internal communications at Twitter between January 6th-January 8th have clear historical import. Even Twitter’s employees understood in the moment it was a landmark moment in the annals of speech,” Taibbi wrote on Friday.

He then shared a screenshot of a Twitter employee asking, “Is this the first sitting head of state to ever be suspended?”

Former President Donald Trump announces he is running for president for the third time at Mar-a-Lago in Palm Beach, Fla., Nov. 15, 2022. (AP Photo/Andrew Harnik, File)

Taibbi reported that executives at Twitter “started processing new power” following their decision to ban Trump, indicating they were “prepared to ban future presidents and White Houses – perhaps even Joe Biden. The ‘new administration,’ says one exec, ‘will not be suspended by Twitter unless absolutely necessary.’”

One unnamed executive alleged the “context surrounding” the actions of Trump and his supporters “over the course of the election and frankly last 4+ years” contributed to the ban.

“In the end, they looked at a broad picture. But that approach can cut both ways,” Taibbi wrote. “The bulk of the internal debate leading to Trump’s ban took place in those three January days. However, the intellectual framework was laid in the months preceding the Capitol riots.”

After alluding to the second installment of the Twitter Files that addressed the shadowbanning of conservatives, Taibbi reported, “As the election approached, senior executives – perhaps under pressure from federal agencies, with whom they met more as time progressed – increasingly struggled with rules, and began to speak of ‘vios’ as pretexts to do what they’d likely have done anyway.”

Taibbi then shared internal Slack messages from Yoel Roth, the former head of trust and safety at Twitter, who made light of heightened discussions with federal agencies following Jan. 6. Roth joked about the lack of “generic enough” calendar descriptions to conceal the “very interesting” meetings he had.

“One particular slack channel offers a unique window into the evolving thinking of top officials in late 2020 and early 2021,” Taibbi reported. “On October 8th, 2020, executives opened a channel called ‘us2020_xfn_enforcement.’ Through J6, this would be home for discussions about election-related removals, especially ones that involved ‘high-profile’ accounts (often called ‘VITs’ or ‘Very Important Tweeters’).”

He continued, “There was at least some tension between Safety Operations – a larger department whose staffers used a more rules-based process for addressing issues like porn, scams, and threats – and a smaller, more powerful cadre of senior policy execs like Roth and [Twitter’s former trust and policy chief Vijaya] Gadde. The latter group were a high-speed Supreme Court of moderation, issuing content rulings on the fly, often in minutes and based on guesses, gut calls, even Google searches, even in cases involving the President.”

“During this time, executives were also clearly liaising with federal enforcement and intelligence agencies about moderation of election-related content. While we’re still at the start of reviewing the #TwitterFiles, we’re finding out more about these interactions every day,” Taibbi added.

Taibbi then turned to correspondence involving Twitter’s policy director Nick Pickles, with one employee asking him if Twitter should say it detects “misinfo” through “ML, human review, and partnerships with outside experts,” adding, “I know that’s been a slippery process… not sure if you want our public explanation to hang on that.”

Pickles himself is seen asking if Twitter could “just say ‘partnerships,’” later writing, “e.g. not sure we’d describe the FBI/DHS as experts.”

Taibbi revealed that Roth not only met with the FBI, DHS and the Office of the Director of National Intelligence, but apparently also discussed the censoring of the Hunter Biden laptop story.

“Some of Roth’s later Slacks indicate his weekly confabs with federal law enforcement involved separate meetings. Here, he ghosts the FBI and DHS, respectively, to go first to an ‘Aspen Institute thing,’ then take a call with Apple,” Taibbi wrote while sharing a screenshot of an exchange.

Taibbi shared another message, explaining, “Here, the FBI sends reports about a pair of tweets,” one from Twitter user John Basham claiming “Between 2% and 25% of Ballots by Mail are Being Rejected for Errors.”

“The FBI-flagged tweet then got circulated in the enforcement Slack. Twitter cited Politifact to say the first story was ‘proven to be false,’ then noted the second was already deemed ‘no vio on numerous occasions,’” Taibbi reported. “The group then decides to apply a ‘Learn how voting is safe and secure’ label because one commenter says, ‘it’s totally normal to have a 2% error rate.’ Roth then gives the final go-ahead to the process initiated by the FBI.”

“Examining the entire election enforcement Slack, we didn’t see one reference to moderation requests from the Trump campaign, the Trump White House, or Republicans generally. We looked. They may exist: we were told they do. However, they were absent here,” Taibbi wrote.

Taibbi then turned to an October 2020 tweet made by former Arkansas Gov. Mike Huckabee, who joked that he voted with his mail-in ballot and then “voted the ballots of my deceased parents and grandparents. They vote just like me!”

This led to internal discussions with one Twitter employee saying “I agree it’s a joke… but he’s also literally admitting in a tweet a crime.”

“The group declares Huck’s an ‘edge case,’ and though one notes, ‘we don’t make exceptions for jokes or satire,’ they ultimately decide to leave him be, because ‘we’ve poked enough bears,’” Taibbi reported. “‘Could still mislead people… could still mislead people,’ the humor-averse group declares, before moving on from Huckabee. Roth suggests moderation even in this absurd case could depend on whether or not the joke results in ‘confusion.’ This seemingly silly case actually foreshadows serious later issues.”

Journalist Matt Taibbi revealed the third installment of the "Twitter Files" on Friday. (Daniel Zuchnik/WireImage/Getty Images)

Taibbi reported that Twitter executives “often expand criteria to subjective issues like intent (yes, a video is authentic, but why was it shown?), orientation (was a banned tweet shown to condemn, or support?), or reception (did a joke cause “confusion”?),” adding “This reflex will become key in J6.”

He then highlighted an October 2020 tweet Trump made declaring “A Rigged Election!” in response to a local report out of Ohio where nearly 50,000 voters received the wrong ballot.

Twitter employees were preparing to slap a “mail-in voting is safe” label on Trump’s tweet but ultimately refrained after realizing “the events took place” and the tweet was “factually accurate,” internal communications show.

Just days before the election, a Trump tweet reading “Big problems and discrepancies with Mail In Ballots all over the USA” was “visibility filtered” so that no one could like or share it.

“Here, senior execs didn’t appear to have a particular violation, but still worked fast to make sure a fairly anodyne Trump tweet couldn’t be ‘replied to, shared, or liked,'” Taibbi wrote.

“Very well done on speed,” Roth commended his staff for acting quickly on the Trump tweet.

Actor James Woods, who had a tweet flagged by the DNC as revealed in the first batch of the “Twitter Files” pertaining to Hunter Biden, resurfaced in the third batch, this time for a tweet angrily reacting to the warning label Twitter slapped onto the Trump tweet.

Twitter employees discussed punishing Woods, suggesting they “hit him hard on future vio with firmer basis.”

Taibbi then pivoted to another tweet, this one by Rep. Jody Hice, R-Ga., which was hit with a “stay informed” label after he wrote, “Say NO to big tech censorship!… Mailed ballots are more prone to fraud than in-person balloting… It’s just common sense.”

Twitter decided to go with a “soft intervention,” with Roth concerned about “wah wah censorship” backlash, according to the released discussions.

“Meanwhile, there are multiple instances involving pro-Biden tweets warning Trump ‘may try to steal the election’ that got surfaced, only to be approved by senior executives. This one, they decide, just ‘expresses concern that mailed ballots might not make it on time,’” Taibbi reported.

The tweet, written by blue-check Twitter user Elijah Daniel, read “they’re going to try to steal the election. you have one week, if you haven’t voted yet- don’t mail. drop it off or vote early.”

Taibbi revealed that the hashtag #StealOurVotes, which claimed Trump and a conservative-leaning Supreme Court, bolstered by the addition of Justice Amy Coney Barrett, would steal the election, was “approved” by Twitter employees since it was “understandable.”

In another example, an October 2020 tweet from former Attorney General Eric Holder claimed the Postal Service was being “deliberately crippled” by the Trump administration. It was initially hit with a warning label, which Roth asked to reverse.

“Some executives wanted to use the new deamplification tool to silently limit Trump’s reach more right away,” Taibbi wrote, noting they wanted to suppress a Trump tweet from Dec. 10. “However, in the end, the team had to use older, less aggressive labeling tools at least for that day, until the ‘L3 entities’ went live the following morning.”

“The significance is that it shows that Twitter, in 2020 at least, was deploying a vast range of visible and invisible tools to rein in Trump’s engagement, long before J6. The ban will come after other avenues are exhausted,” Taibbi tweeted. “In Twitter docs execs frequently refer to ‘bots,’ e.g. ‘let’s put a bot on that.’ A bot is just any automated heuristic moderation rule. It can be anything: every time a person in Brazil uses ‘green’ and ‘blob’ in the same sentence, action might be taken.”

He highlighted an example where Twitter moderators added a “bot” to a Trump claim made on Breitbart which would be “invisibly watching” both Trump and Breitbart.

Part Three of the "Twitter Files" laid out the platform's efforts to suppress former President Trump's tweets before ultimately removing him in January 2021. (Reuters)

Taibbi took time to explain key terms Twitter employees used in the company’s censorship practices. To “bounce,” for example, meant to give a Twitter account a timeout, which usually lasted 12 hours. An “interstitial” is a physical label placed atop a tweet so it cannot be seen. “PII,” which stands for “Public Interest Interstitial,” is an interstitial applied for “public interest” reasons. “Proactive V” refers to proactive visibility filtering.

“This is all necessary background to J6. Before the riots, the company was engaged in an inherently insane/impossible project, trying to create an ever-expanding, ostensibly rational set of rules to regulate every conceivable speech situation that might arise between humans,” Taibbi wrote. “This project was preposterous yet its leaders were unable to see this, having become infected with groupthink, coming to believe – sincerely – that it was Twitter’s responsibility to control, as much as possible, what people could talk about, how often, and with whom.”

Taibbi shed light on the “panic” that erupted internally on Jan. 6. In one exchange Roth is shown executing the “bouncing” of Trump’s account.

“This theme of Policy perhaps being stressed by queries from Communications executives – who themselves have to answer the public’s questions – occasionally appears,” Taibbi wrote. “The first company-wide email from Gadde on January 6th announced that 3 Trump tweets had been bounced, but more importantly signaled a determination to use legit ‘violations’ as a guide for any possible permanent suspension.”

Taibbi also shared a screenshot of an exchange in which employees reacted lividly to a Trump tweet telling his supporters, “Go home with love & in peace.”

“What the actual f—?” one employee reacted. “Sorry, I actually got emotionally angry seeing that. Turns out I’m not a full robot. Who knew?”

Twitter executives decided to ban then-President Donald Trump from their social media platform after the Jan. 6, 2021, attack on the US Capitol, while maintaining regular check-ins with the FBI and other federal authorities as they decided what postings should be targeted for censorship, the latest report from new company CEO Elon Musk reveals.

In one of a series of tweets Friday evening, independent journalist Matt Taibbi said internal company messages showed how Twitter’s internal standards eroded during the months leading up to Jan. 6, with high-ranking executives violating their own policies while interacting with various federal agencies.

Taibbi posted messages that he said “show Twitter executives getting a kick out of intensified relationships with federal agencies.”

In one, Yoel Roth, then Twitter’s head of trust and safety, appears to describe how he struggled to disguise the purpose of weekly meetings with FBI and other government officials that helped guide the company’s decisions on policing posts on its platform.

“I’m a big believer in calendar transparency. But I reached a certain point where my meetings became…very interesting…to people and there weren’t meeting names generic enough to cover,” he wrote.

In response, someone whose identity is obscured, suggested, “Very Boring Business Meeting That Is Definitely Not About Trump :)”

Another message shows Roth lamenting the personal fallout from Twitter’s apparently on-the-fly decision to suppress The Post’s exclusive October 2020 scoop about Hunter Biden’s infamous laptop on the unfounded assertion it was based on “hacked materials.”

“We blocked the NYP story, then we unblocked it (but said the opposite)…and now we’re in a messy situation where our policy is in shambles, comms [public relations] is angry, reporters think we’re idiots and refactoring an exceedingly complex policy 18 days out from the election,” he wrote.

Roth added: “In short, FML [f–k my life].”

Taibbi said some redacted messages showed “the internal debate leading to Trump’s ban.”

One message said, “we currently analyze tweets and consider them at a tweet-by-tweet basis which does not appropriately take into account the context surrounding.”

It continued, “you can use the yelling fire into a crowded theater example — context matters and the narrative that trump and his friends have pursued over the course of this election and frankly last 4+ years must be taken into account.”

Taibbi wrote, “Before J6, Twitter was a unique mix of automated, rules-based enforcement, and more subjective moderation by senior executives.”


Before banning Trump, Twitter execs started slapping his tweets with warning labels and on Dec. 10, 2020, Taibbi said, a message shows that “Twitter executives announced a new ‘L3 deamplification’ tool” to also limit users from sharing Trump’s messages.

Taibbi said the first tweet under consideration included a video of US Rep. Jim Jordan (R-Ohio) appearing on the conservative Newsmax cable TV station over the false claim that “Trump got 11 million more votes.”


Taibbi’s tweets came a day after fellow independent journalist Bari Weiss posted photos showing how Twitter used secret tools to “shadow ban” certain users and suppress their posts on the platform.

Dr. Jay Bhattacharya was put on Twitter’s “Trends Blacklist” after arguing against COVID-19 lockdowns, leading him to tweet Thursday, “I’m curious about what role the government played in Twitter’s suppression of covid policy discussion.”

“We will see with time, I suppose,” he added.

Conservative commentators Dan Bongino, a Fox News host, and radio host and conservative activist Charlie Kirk of Turning Point USA were also put on a “Search Blacklist” and slapped with a “Do Not Amplify” label, respectively.

Bongino fumed on Sean Hannity’s Fox News show Thursday night that his treatment was “some Soviet-style bulls–t” and Kirk tweeted Friday, “We’ll never know how different the country would be had they never put their thumb on the scale.”

“All of this is evil, un-American, and it should be criminal,” Kirk added.

Last week, Taibbi kicked off the series of “Twitter Files” tweets by revealing internal documents tied to the company’s suppression of The Post’s October 2020 scoop about Hunter Biden’s infamous laptop.

“They even blocked its transmission via direct message, a tool hitherto reserved for extreme cases, e.g. child pornography,” he wrote.

Taibbi said a former employee told him that “everyone knew this was f–ked,” but the company’s response “was essentially to err on the side of … continuing to err.”
