X to deal with widespread Hamas-related misinformation or face EU fines
Elon Musk’s social media platform X reported that it took down hundreds of Hamas-affiliated accounts and thousands of misleading posts following Hamas’ deadly terrorist attack on Israel. On Tuesday, Oct. 10, the European Union sent a letter to Musk demanding that X address illegal content and misinformation about the attacks within 24 hours or face penalties under the EU’s recently enacted Digital Services Act. Those penalties could include fines running into the billions.
Following the terrorist attacks by Hamas against 🇮🇱, we have indications of X/Twitter being used to disseminate illegal content & disinformation in the EU.
“There is just this proliferation of misleading content, and when it is not just made up,” Ward said. “It is out of context or out of time; this video of an attack by Israeli forces on Gaza is, in fact, that, what it purports to be, but it is from May, not from this current set of attacks.”
A post reportedly seen by more than 800,000 people on X included a video of U.S. military aircraft said to be heading to Israel with equipment. The problem, according to BBC journalist Shayan Sardarizadeh, is the video was taken in 2019.
It is the most recent example of social media’s uphill battle in thwarting the dissemination of misinformation.
“The past few days have just been this vivid lesson in the fact that you literally cannot trust what social media feeds you these days,” Ward said.
X CEO Linda Yaccarino responded to the EU Wednesday, Oct. 11, saying the company has “redistributed resources and refocused internal teams who are working around the clock to address this rapidly evolving situation.”
Everyday we're reminded of our global responsibility to protect the public conversation by ensuring everyone has access to real-time information and safeguarding the platform for all our users. In response to the recent terrorist attack on Israel by Hamas, we've redistributed… https://t.co/VR2rsK0J9K
However, the sheer number of posts makes it hard for the social media company to keep up. Yaccarino wrote that the company has already moved to take down or label tens of thousands of posts that break the company’s rules on violent speech, manipulated media and graphic media.
In response to Yaccarino’s letter, the EU said it would analyze the measures X is taking and will consider next steps.
The seemingly nonstop wave of misinformation comes just a week after X announced it was scaling back the teams of people patrolling misinformation.
“X, as you mentioned, is really the locus of so much of this stuff because they have cut back so many of their teams,” Ward said on NBC. “Their elections integrity team, their trust and safety teams, all of those have been scaled back.”
X is not the only platform dealing with misinformation about the Hamas attacks. In a similar letter to Facebook’s parent company, Meta, the EU gave Mark Zuckerberg 24 hours to respond with measures that his social media platform is taking to counter the spread of misinformation about the attacks under the same Digital Services Act. The act forces major social media companies to remove illegal content from their platforms.
A Meta spokesperson said that its teams are working around the clock and teaming up with third-party fact-checkers to “limit the spread of misinformation.”
UN officials looking to crack down on spread of online disinformation
World leaders will have their work cut out for them as they gather at the United Nations General Assembly the week of Sept. 17. They will look into the climate crisis, food insecurity, the invasion of Ukraine and more.
According to officials, the U.N. is also looking to win the information war. It’s part of an effort to “massively ramp up” the response to online “disinformation.”
Melissa Fleming, undersecretary-general for Global Communications, spoke at the Nobel Prize Summit in May. She gave a speech called “Healing Our Troubled Information Ecosystem,” and said the United Nations is currently developing a code of conduct for information integrity on digital platforms.
Fleming says the goal is to slow the spread of both misinformation and disinformation, as well as hate speech on social media platforms.
According to the U.N., misinformation is the “accidental spread of inaccurate information,” and disinformation is “inaccurate information that intends to deceive and is spread in order to do serious harm.”
“So, we’re creating at the U.N. a central capacity to monitor and also have the ability to rapidly react when mis- and disinformation and hate speech is not just threatening our people, our operations, but also, the issues and the causes that we’re working on,” Fleming said. “But also, we’re going to be gearing up our verified initiative around climate change and developing this U.N. code of conduct on information integrity and digital platforms hoping to set global standards that we can all advocate around so we can collectively work for a more humane internet.”
According to Fleming, since Elon Musk acquired X — formerly known as Twitter — and fired hundreds of employees, there is no way for U.N. officials to call someone for help at X in flagging content that’s misleading, hateful, abusive or inciting violence.
“We’re now seeing that almost anyone can promote disinformation for the price of a blue tick,” Fleming said.
In place of large content moderation teams, X has leaned on its public Community Notes feature, which is intended to add context to potentially misleading posts.
Fleming says the U.N. has teamed up with the platform to elevate reliable information around COVID-19 and climate to amplify what they call “trusted messengers.”
At the end of her speech, Fleming said the U.N. is looking to “join forces and heal our information ecosystem,” which they’re planning to do through the new code of conduct.
Last year, the General Assembly and the Human Rights Council called on the U.N. to respond to the spread of online disinformation.
In August of 2022, the U.N. Secretary-General Antonio Guterres released his Countering Disinformation report, which states the General Assembly is concerned over disinformation on social media platforms, and welcomes efforts “to promote international cooperation in countering disinformation… for the promotion and protection of human rights and fundamental freedoms.”
Back in February, Guterres called for action from tech companies and governments to help stop the spread of misinformation and disinformation online.
Musk responded to Guterres on X saying, “The U.N. is more likely to cause, rather than prevent, disinformation.”
As of right now, there’s no enforcement in place and it is unknown what action U.N. officials plan to take.
On Monday, Sept. 18, Guterres and other U.N. officials asked for input in developing a code of conduct for information on digital platforms that “seeks to provide a concerted global response to information threats that is firmly rooted in human rights, including the rights to freedom of expression and opinion and access to information.”
People can submit their opinions and suggestions on the U.N. website until Dec. 1, 2023.
Threads blocks search results for COVID-related terms
Months after the government officially ended the COVID-19 public health emergency, Big Tech is still monitoring posts that mention COVID. Instagram’s companion app Threads is blocking search results for key COVID-related terms, including “COVID,” “long COVID” and “vaccines.”
Results for those searches will come up as blocked and direct users to the Centers for Disease Control and Prevention’s website.
According to Meta, Facebook and Instagram’s parent company, Threads intentionally blocks search results for terms that could surface sensitive content.
Searches for words like “nude,” “porn,” or “sex” will show up as blocked, and now “vaccine,” “vaccination” and “coronavirus” have been grouped in that same category.
This new search function was rolled out during the week of Sept. 3. Meta says it is refining the tool and will update its “search functionality” once the company is confident in the quality of the results.
The tool has been met with criticism, including from some health workers who argue that the site is taking away a resource of information as new COVID variants are starting to circulate.
“Long COVID” — one of the phrases being blocked in Threads’ searches — was originally identified by people sharing symptoms online. The term was first coined by a user on Twitter — now known as X — before being adopted by the CDC.
With the new search feature, users will not be able to read through posts that mention those terms.
Meta has decided that the risk of disinformation outweighs the benefit of any information at this time.
This is part of a larger debate Straight Arrow News has been covering: whether social media’s censorship of COVID posts protects Americans from false information or encroaches on the principle of free speech.
Americans say media bias divides the nation. This chart shows the most polarizing outlets.
For the first time on record, more Americans report having no trust at all in the mass media to report the news fully, accurately and fairly than report trusting the media a fair or great amount. The most recent Gallup poll on the topic shows overall trust in media remains near a record low.
Meanwhile, nearly 3 in 4 Americans believe the media is doing more to increase polarization in the country, according to a 2023 AP-NORC poll.
Media bias is a prevailing concern for news consumers. Half of Americans reported feeling that most national news organizations intend to mislead, misinform or persuade the public, Gallup and Knight Foundation found.
It’s no surprise, then, that Americans are increasingly searching for unbiased news sources. Google searches spike around elections, with the most interest in history recorded in November 2020.
“Americans recognize the problem,” said Vanessa Otero, CEO of Ad Fontes Media and creator of the Media Bias Chart. “They realize when there’s this flood of information that they have to sort through, and unfortunately, because there are so many news sources, the responsibility falls on folks to sort through them.”
Third-party media bias evaluators like Ad Fontes Media aim to help news consumers navigate the increasingly crowded and complex media terrain using a simple media bias chart.
Source: Ad Fontes.
While analyzing bias can also be inherently biased, Ad Fontes Media said it combats that by using a politically balanced group of analysts to determine where media organizations score on its chart. That means each piece of news is rated by analysts who identify as right-leaning, left-leaning and center.
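Ad Fontes does not publish its exact aggregation formula, but the balanced-panel idea can be sketched in a few lines of Python. Everything below — the field names, the score scales and the simple averaging — is an illustrative assumption, not Ad Fontes’ actual method.

```python
# Hypothetical sketch of combining ratings from a politically balanced
# analyst panel. Field names, scales and the averaging rule are assumed
# for illustration only.
from statistics import mean

def combine_panel_ratings(ratings):
    """Each rating is a dict with the analyst's self-identified leaning
    ('left', 'center' or 'right') plus bias and reliability scores."""
    # Require at least one analyst from each leaning so no side dominates.
    leanings = {r["leaning"] for r in ratings}
    if not {"left", "center", "right"} <= leanings:
        raise ValueError("panel must include left, center and right analysts")
    return {
        "bias": mean(r["bias"] for r in ratings),                # assumed scale: negative = left, positive = right
        "reliability": mean(r["reliability"] for r in ratings),  # assumed scale: higher = more reliable
    }

panel = [
    {"leaning": "left", "bias": -4, "reliability": 50},
    {"leaning": "center", "bias": 0, "reliability": 48},
    {"leaning": "right", "bias": 2, "reliability": 46},
]
score = combine_panel_ratings(panel)  # bias averages to -2/3; reliability to 48
```

The point of the balance check is the one the article makes: a single outlet score only carries weight if analysts from across the spectrum contributed to it.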
Straight Arrow News spoke to Ad Fontes Media CEO Vanessa Otero about America’s media bias problem and how it’s affecting society.
This interview has been edited for length and clarity.
Q: How have you seen the media bias landscape change since you started analyzing this in 2016?
A: It’s a great question. It’s grown. There are just more and more and more news and information sources out there every day. Unfortunately, one way it hasn’t changed is that it’s still quite polarizing. When I started the Media Bias Chart in 2016, the impetus of it was really the fact that there were so many polarizing news sources and people were fighting over them. Unfortunately, that’s still the case, but there’s just more to navigate.
Q: Would you say there are more hyper-partisan options now or more options in the middle?
A: I think there’s more of both. There are a lot of news sources that are in the middle. Most of the news sources we rate tend to be less biased, but the ones that are on the far left and right are the loudest and they seem to get most of the attention.
So there’s really more of both. But because there are so many news sources to choose from, people have to be really intentional about seeking out things that are minimally biased and highly reliable.
Q: What does that tell us about Americans’ news consumption if the ones on the edges are getting more viewership and attention?
A: Look, folks are polarized. Every poll that you look at shows increasing polarization over the last 10, 20, 30 years. And not just polarization but what’s called affective polarization: the kind where it makes you dislike, distrust, or even hate folks on the other side.
That, I think, is really damaging and dangerous. When you’re not assuming the folks on the other side have good intentions; when you’re just assuming that they disagree with you because they’re stupid or evil or both, it’s really hard to come to any kind of consensus.
But things that you agree with tend to give you this confirmation bias. We’re programmed to like things that we already agree with. So what it tells you about Americans is that we can fall into those traps really easily.
It’s not hopeless, though. We can do something about it. We can recognize that something we strongly agree with is just feeding our confirmation bias, and that maybe that’s not the most effective way to make decisions or live your life. If politics is dividing your family and friends and you’re not able to have conversations with folks, it may be an indication that you’re focusing on very polarizing news sources.
Q: This is going to be a hard question to answer. It’s sort of a ‘chicken versus the egg’ argument. Do you think that more media bias is increasing political division or do you think it’s the other way around?
A: Great question. The causes of polarization are many. There are constitutional causes because you can have politicians that drive polarization. But the media is a big part of it.
There are many stakeholders in our democracy – the citizenry, politicians, the media – and each influences each other in a push-and-pull kind of way.
One thing we’ve seen since the advent of cable news is the participation of politicians in media. How often do you look at a cable news show and politicians are on the cable news show? This happens all night, every night, every channel.
So politicians are part of the media and politicians, just like every other citizen, are susceptible to confirmation bias; to being wrapped up in their own side’s stuff and not being able to filter out highly reliable versus low-reliable information.
I don’t know if you notice, but there are some politicians that are not very good at telling what’s true in the news and they participate in the extreme bias of the news. So I think it’s really a combination of both where they feed off of each other.
Q: CNN says it’s unbiased. Fox dropped the ‘Fair and Balanced’ slogan but says it’s the most trusted. What does your analysis say?
A: It’s funny, what you say as a slogan and what you actually do can be different things. Also, how people perceive you can be really different things.
One of the most common ways of measuring media bias is consumer opinion polling: asking people, ‘How much do you trust Fox or CNN or MSNBC?’ And really, the answer to that question tells you so much more about the person and their politics than it does about the news source.
The way we go about it is by analyzing the content, which is hard to do. It’s a little bit easier to poll a bunch of folks and say, ‘What do you think about this?’ But the content itself has the answers.
You can look at the headlines, the graphics and each individual sentence. You can see how they’re expressed as fact, analysis and opinion, and you can fact-check the claims that are in there. You can see if they advocate for left or right political positions. You can see how they refer to political issues or opponents. So you can actually tell from looking at the content.
What our data shows is that Fox is right-leaning, CNN is left-leaning, and MSNBC is left-leaning, a little bit more so than CNN. And they have varying levels of reliability. What’s really fascinating is that internet content – like CNN.com, FoxNews.com, or MSNBC.com – tends to be less biased and more reliable than their TV counterparts, which have a lot of primetime opinion programming. And opinion programming really isn’t news.
Q: It is our belief that Americans are hungry for unbiased news. That is why Straight Arrow News is here. How is Straight Arrow News doing in that mission, according to your analysis?
A: Really well. I think Straight Arrow News is correct in the assumption that people are looking for unbiased news. Of course, everyone has some bias, right? It’s really hard to be unbiased. But you can mitigate your bias as best you can by trying, by showing balance and by describing things as straightforwardly as possible.
We’ve seen so many Americans say, ‘Yes, we want unbiased news. Can I just go turn something on where I’m not being told what to think? Or I’m just getting the facts?’ People say that all the time. It’s one of the reasons you see this level of trust declining in those opinion polls. People don’t trust news when it’s more full of opinion and analysis than actual fact reporting.
Straight Arrow News is right in the middle for bias on our chart. That middle section is labeled minimal or balanced bias. And it’s rated as highly reliable. That’s what we’re looking for in the media landscape. That’s what we want. We want to promote the work of good journalists who are bringing folks facts that they wouldn’t otherwise be able to find on their own.
Q: In the history of Google search trends, the search for unbiased news sources was never higher than in November 2020. What does that tell us about what Americans are looking for as they go to the polls?
A: So much. We actually did some studies around social media and monitoring the spread of biased and unreliable information before, during and after the November 2020 election. There was a marked increase in misinformation and polarizing content being shared across the board.
People have a need and desire to be well-informed. They don’t want to feel lost. You don’t want to go around not knowing what’s true. It’s very disorienting. It’s very unsettling.
What that tells me is that Americans recognize the problem. Americans are smart. They realize when there’s this flood of information that they have to sort through and unfortunately, because there are so many news sources, the responsibility falls on folks to sort through them.
That’s why we exist because no one has time to sort through tens of thousands of information sources. So we’re a reference point. Ultimately, people should make the determinations for themselves and learn to recognize reliability and bias. But when there are just so many out there, it’s really helpful to have a guidepost. And so we just hope we can be that for folks.
Musk sues anti-hate speech nonprofit for alleged false claims, illegal site scraping
Elon Musk’s X Corp. is suing a nonprofit that fights hate speech and disinformation. The social media platform X, formerly known as Twitter, is accusing the Center for Countering Digital Hate (CCDH) of asserting false claims and encouraging advertisers to stop investing in the company.
In July, CCDH published research that stated hate speech had increased toward minority communities on the platform. One report said Twitter took no action against 99% of Twitter Blue accounts the center reported for tweeting hate.
X and CEO Linda Yaccarino called the report “false.” They accuse the CCDH of falsely describing X as “overwhelmed with harmful content.” X says the research was based on outdated metrics gathered shortly after Musk’s acquisition of Twitter.
In the lawsuit, filed July 31 in San Francisco federal court, X Corp. claims the CCDH gained access to its data without authorization, illegally scraped data from its platform and used scare tactics to drive away advertisers.
Advertisers began fleeing the platform last year after Musk bought it for $44 billion. Musk has admitted that advertising revenue is down more than 50% since October.
In a letter sent July 31, Musk threatened to sue the nonprofit for unspecified damages. In response to Musk’s letter, lawyers for the CCDH accused X of intimidation and said the allegations waged against the center had no factual basis.
Expert warns AI could be used to help commit a biological attack
Lawmakers on both sides of the aisle have raised yet another potential danger of artificial intelligence – the possibility of AI being used to create a biological attack. Dario Amodei, the CEO of the AI company Anthropic, testified in front of the Senate Judiciary Committee on Tuesday, July 25. He said AI could help otherwise unskilled actors develop biological weapons that could be used in an attack.
“Today, certain steps in bioweapons production involve knowledge that can’t be found on Google or in textbooks and requires a high level of specialized expertise, this being one of the things that currently keeps us safe from attacks,” Amodei testified Tuesday. “We found that today’s AI tools can fill in some of these steps albeit incompletely and unreliably. In other words, they’re showing the first nascent signs of danger.”
According to Straight Arrow News’ exclusive Media Miss™ tool, this story has been covered only by sources in the center and on the right, despite Democrats joining Republicans in sounding the alarm at the hearing.
“The dangers are not just extinction, but loss of jobs, one of potentially the worst nightmares we have,” Sen. Richard Blumenthal (D-CT) said at the hearing. “Each day these issues are more common, more serious and more difficult to solve, and we can’t repeat the mistakes that we made on social media, which was to delay and disregard the dangers.”
Also testifying at the hearing was Stuart Russell, a professor of computer science at the University of California, Berkeley. He warned that AI could be used for disinformation.
“With these systems we can present to the system a great deal of information about an individual – everything that they have ever written or published on Twitter or Facebook, their social media presence, their floor speeches, and train the system to generate a disinformation campaign, particularly for that person,” Russell testified.