News and Commentary

When Algorithms Turn Deadly: Five Fixes For Preserving Free Speech In A System Out Of Control

It has never been clearer that online radicalization is leading to real-world consequences.


From State Rep. Melissa Hortman’s murder to Luigi Mangione’s manifesto to Charlie Kirk’s assassination to whatever fresh horror will have meteor-impacted the news cycle between my typing these words and their publication, it has never been clearer that online radicalization is leading to real-world consequences.

My team at US the Story last year analyzed just a handful of the most vile posts from 14 of the top full-blown hate accounts on one platform. Those posts showed a combined view count of over 50 million. The accounts themselves had a combined reach of 6.45 billion impressions. Engagement — the total number of times people interacted with the content to like, quote, or share it — crossed 448 million over roughly 10 months.

These posts, their volume, and their reach create a permission structure for more and more hatred. It’s a quickening descent into violence and chaos; the more it becomes fair game to denigrate and threaten “the other group,” the more blaming and attacking that group becomes normalized.

One way of looking at this is as a network contagion issue. Debates around social-media-facilitated social contagions have raged in the culture wars for years. Young women in particular have been targeted by bots and accounts promoting anorexia, gender dysphoria, and more. Viral events such as the 2023 explosion of bin Laden’s “Letter to America” and widespread messaging encouraging suicidality caused immediate concern in the broader culture. At this point, we shouldn’t be surprised that algos designed to maximize virality for profit at any cost fuel the most toxic, contagious, and sticky topics.

Studies have consistently shown that violent crime follows a Pareto distribution, with 1% of criminals committing up to 60% of crimes. The same is true of hateful content strategically riding the algos in the service of encouraging real-world violence. Do we really want our social media platforms taken over by psychopaths in a full-blown network contagion? And more important, what happens if we allow it?

Most contagions, left unimpeded, spread enough to damage and threaten the entire system. That’s why many such contagions are fueled by foreign PSYOPs.


The aim of these contagions is never the stated topic at hand. They aren’t actually about trans rights or Jews or vaccinations or immigration. They’re about, first, setting Americans against one another in radicalized tribal conflicts, and, second, making these topics impossible or even dangerous to discuss productively (eliminating sane discussion, an obvious precondition for solving problems). During the Cold War, for instance, the KGB provided financial and logistical support to the Black Panthers while also sending forged menacing letters from the Ku Klux Klan to black leaders in order to stoke racial tensions.

The point is to poison both sides in order to sow maximum chaos to undermine and destroy Western democracies.

It’s nearly impossible to withstand algorithmically driven content engineered to maximize rage, fear, and anxiety in the service of keeping people glued to their screens. It is indeed a type of mind control. This level of nervous-system hacking and the deluge of toxic messaging can trigger terrible instincts and behavior, especially in young people whose prefrontal cortexes are literally not yet fully developed.

On top of hostile regime influence and outrage algorithms maximized for profit, there are also bad-faith domestic players urging America along its path of destruction. Most of these behind-the-scenes manipulators exhibit the Dark Tetrad traits of sadism, narcissism, Machiavellianism, and psychopathy. These nefarious individuals are using sophisticated and furtive onboarding techniques to hijack the platforms.

Earlier this year our team recorded a live conversation on a prominent social media platform between self-avowed neo-Nazis, one of whom runs a sizable account with millions of views and considerable reach. The conversation was a strategy session on how to manipulate larger, non-ideologically aligned influencer accounts into sharing Nazi content. The strategy entails using carefully selected words, phrasing, and arguments that may be polarizing but aren’t explicitly antisemitic or racist. They target specific influencers and thought leaders through interests that are political but outside the “Nazi diet.” When a large account interacts with one of these seemingly benign posts, the interaction boosts the Nazi account algorithmically, so it gains followers, views, and increased power to change the culture on the platform and elsewhere. It’s the Nazi equivalent of algorithmic grooming.

The same holds true with radical leftist techniques. An April study by our partner group, NCRI, found that more than 50% of self-identified people on the left of the political spectrum believed it at least somewhat acceptable to assassinate President Trump. And a whopping 78% of BlueSky users expressed some level of support for Luigi Mangione’s cold-blooded assassination of UnitedHealthcare CEO Brian Thompson. The biggest predictor of Mangione support? BlueSky usage.

Heavy social media use is indeed the one constant. Increasingly, when we dissect the social media diet and posts of those onboarded to radicalism, we are finding a toxic blend of ideologies. This is because the aim is not support for liberal or conservative politics. The aim is to get as many Americans as possible (especially young ones) to arrive where far left and far right meet at the horseshoe of nihilism: the No Lives Matter cohort. Once the conditions are set for, say, a young man’s radicalization, it doesn’t matter to our hostile foreign adversaries whether the next bus he boards is filled with violent trans-Antifa-types or your-body-my-choice groypers.

We have no intention of vilifying the social media platforms (aside from those purposefully running full-blown foreign PSYOPs) or undermining free speech. In our informal engagements with leaders at these platforms we have found them concerned and receptive to discussing ways to counter this. Vilifying CEOs, big tech, and the workers within these corporations not only forecloses on cooperative progress, but will likely shove them into a defensive posture. There are plenty of good people in tech leadership who have expressed a willingness to address this problem.


So how might we approach this?

Here are five concrete recommendations to turn down the temperature on destructive social media trends while protecting freedom of speech and steering clear of government overreach and censorship.

1. Make algorithms transparent.

Citizens, parents, and the free market (advertisers and content producers) have a right to know if they or their children are participating in platforms feeding suicidality, hate-mongering, violent pornography, anti-American propaganda, or anything else damaging. This is not about control or censorship but about allowing people to know what they are engaging with so they can make informed choices for themselves, their companies, and their children. We aren’t demanding that the platforms publish the proprietary trade secrets behind their algorithms. Rather, we are suggesting that independent researchers be allowed access to the data required to assess algo impacts in relation to illegal activity and to determine whether the platforms are enforcing their own rules as defined in their terms of service. That is indeed the contract with the user—and the free market.

There is also a valid argument for the right of Americans to explore fringe and even perilous ideas and viewpoints. The world is a dangerous place, and neither the government nor corporations should foreclose on the rights of citizens to explore beyond the bounds of what is considered widely acceptable (excluding, of course, illicit materials). But contact with the terrible and extreme should be made knowingly, not as a result of covert algorithms that are optimized for profit and forced into your feed. Ideally, dipping into dangerous arenas should be inoculating, not poisoning.

2. Reward those on social media who identify themselves as verified users and stand behind their words in the public square.

Social media participants who use their real identities deserve to have their opinions elevated above those who post anonymously and have zero accountability. A free marketplace of ideas should privilege a modicum of courage, transparency, and responsibility.

There are certainly arguments to be made for anonymous accounts, with whistleblowers being the oft-cited example. That’s fair enough. They can still post their information in a second tier of engagement below verified users. It just won’t be elevated to the top alongside every anonymously sourced conspiracy theory.

3. Differentiate freedom of speech from freedom of reach for profit.

As a vehement free speech advocate, I believe in the right of people to explore what they want and express themselves how they wish. But algorithms pushing hatred, lies, and violence are waging asymmetric war on the truth. Freedom of speech is one thing. Distortion of speech and covert manipulation of the free marketplace of ideas is another.

Sunlight is indeed the best disinfectant for bad ideas but light of day is insufficient to combat targeted deep-machine-learning computing power designed to elevate lies above all else and compounded by the machinations of psychopathic actors. This level of skew on the playing field makes a competition of ideas nearly impossible. Why hide truth at the bottom of a cesspool? A hierarchy of value and a hierarchy of information is necessary for any endeavor that hopes to produce anything beyond chaos and ruin.

4. Preserve human judgment at social media companies.

Of course humans have biases and can fall victim to corruption. But that doesn’t mean we should eliminate human good from the decision-making process. Shouldn’t we encourage and reward that which moves us toward greater truth and health, rather than the opposite? Every playing field needs a referee, and every company should have stewards of common sense and common decency who assess the automated content moderation systems to ensure they’re not being gamed and to help enforce the agreed-upon rules of the game. Corporations are certainly equipped to encourage free speech, social engagement, and network health while ensuring that the tiny percentage of psychopathic accounts are not rocket-fueled by algos to infect the entire platform.

5. Identify and combat bot swarms and inauthentic coordinated material.

Users, advertisers, and the platforms themselves should know whether they are engaging with real accounts and ideas or with manipulative bad actors and swarms of bots from a Saint Petersburg troll farm. Independent research access to the data can bring additional resources to the table when it comes to identifying these threats to users and the platforms themselves. A bigger ecosystem of transparency is to our collective benefit. 

These five concrete recommendations would go a long way toward tamping down chaos, lies, and sadism online and in the real world.

* * *

Gregg Hurwitz is the New York Times #1 internationally bestselling author of 26 thrillers including the Orphan X series. His novels have won numerous literary awards and have been published in 33 languages. Gregg currently serves as the Co-President of International Thriller Writers (ITW). Additionally, he’s written screenplays and television scripts for many of the major studios and networks, and is an award-winning documentary producer. Gregg has also written comics for AWA (including the critically acclaimed anthology NewThink), DC, and Marvel, as well as poetry. Currently, Gregg is working against polarization in politics and culture. To that end, he’s penned dozens of Op/Eds and pieces for The Wall Street Journal, The Guardian, The Bulwark, Salon, and others, and pieces of creative content which have won numerous industry awards and achieved several hundred million views on digital TV platforms. He also helped write the opening ceremony of the 2022 World Cup.

The views expressed in this piece are those of the author and do not necessarily represent those of The Daily Wire.
