In a widely reported interview last week, Twitter CEO Jack Dorsey declared that his company could not “afford to take a neutral stance anymore” on hot-button social and political issues. Pressed on why his site seems to focus its enforcement on conservatives while largely ignoring voices on the Left, such as radical racist and anti-Semite Louis Farrakhan, Dorsey cited the case of Canadian progressive feminist Meghan Murphy, whom his company recently banned after a series of tweets arguing that transwomen are not women.
On Monday, Murphy filed a lawsuit against Dorsey’s company for banning her from the platform.
“Men are not women,” Murphy tweeted before her ban. “How are transwomen not men? What is the difference between men and transwomen?” Her account was permanently suspended in November after she referred to a transgender activist using male pronouns.
The company’s rationale for banning Murphy: She had allegedly violated its “hateful conduct” rules, which “prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category” (full text of the policy below). As National Review’s Mairead McArdle points out, Twitter added a new wrinkle to its rules in late October, which it then retroactively applied to some of Murphy’s past posts: “targeted misgendering or deadnaming of transgender individuals”; in other words, calling a biological male who identifies as female a “he,” or using transgender individuals’ birth names rather than their new names.
After her account was initially locked, Murphy deleted her past tweets pushing back on the notion that men can become women. But her account was locked again after she decried the rules as “bullsh**” and called the “dogma” behind them “insane.” A few days later, Murphy was permanently banned after she referred to transgender activist Jonathan Yaniv, a biological male who identifies as female, using a male pronoun.
In a press release Monday, Murphy’s legal team explains that she’s “fighting back against silencing and censorship in the gender identity debate” and provides more details on the final incident that prompted her permanent ban:
Meghan Murphy is an independent Canadian feminist writer and journalist. She is the founder, editor, and publisher of Feminist Current, Canada’s leading feminist website. Meghan is a well-known and well-respected writer on feminist issues, and has spent her career fighting violence against women. Her work has appeared in numerous publications across the globe, and she holds an M.A. in Gender, Sexuality and Women’s Studies from Simon Fraser University.
Now, she is being silenced.
On November 23, 2018, Twitter permanently suspended her account. The reason? She referred to a trans-identified male (Jonathan Yaniv) using a male pronoun—even though this individual continues to identify himself using his male name on multiple social media platforms—including on Twitter, as well as in the Google review Murphy shared as part of her tweet. Yaniv had previously filed multiple well-publicized human rights complaints against female estheticians who refused to give him a Brazilian bikini wax. Yaniv publicly bragged that he was personally responsible for having Murphy banned from Twitter.
Murphy filed the lawsuit in state court in San Francisco County, where Twitter is headquartered. As the press release explains, her suit accuses Twitter of false advertising as well as “secretive,” deliberately deceptive practices regarding its user policies:
Twitter grew to prominence by advertising itself as “the free speech wing of the free speech party.” It repeatedly promised its users in its Terms of Service and elsewhere that it would not censor their speech. Its Terms of Service state that any changes “Will not be retroactive,” and that it will provide 30 days’ notice to users of any changes. But Twitter inserted a highly controversial new policy against “misgendering or deadnaming” transgender individuals without providing notice to anyone—a clear violation of its promises to users. Twitter’s roll-out of the policy was so secretive that the exact date that the new policy was added has never been confirmed, by Twitter or anyone else.
Murphy’s team ends the release by presenting her case as an attempt to fight for all those voices who have been “silenced by social media censorship.”
“The big tech giants are counting on users to quietly accept their bans and not stand up for their rights. But Murphy is fighting back against the attempts of powerful social media conglomerates to silence her and millions of others. She has filed a lawsuit on behalf of everyone who has had their voices silenced by social media censorship,” the release reads, linking to a site where supporters can donate.
Below is the full text of Twitter’s “Hateful Conduct Policy”:
Hateful Conduct Policy
Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.
Hateful imagery and display names: You may not use hateful images or symbols in your profile image or profile header. You also may not use your username, display name, or profile bio to engage in abusive behavior, such as targeted harassment or expressing hate towards a person, group, or protected category.
Twitter’s mission is to give everyone the power to create and share ideas and information, and to express their opinions and beliefs without barriers. Free expression is a human right – we believe that everyone has a voice, and the right to use it. Our role is to serve the public conversation, which requires representation of a diverse range of perspectives.
We recognise that if people experience abuse on Twitter, it can jeopardize their ability to express themselves. Research has shown that some groups of people are disproportionately targeted with abuse online. This includes: women, people of color, lesbian, gay, bisexual, transgender, queer, intersex, and asexual individuals, and marginalized and historically underrepresented communities. For those who identify with multiple underrepresented groups, abuse may be more common, more severe in nature, and have a higher impact on those targeted.
We are committed to combating abuse motivated by hatred, prejudice or intolerance, particularly abuse that seeks to silence the voices of those who have been historically marginalized. For this reason, we prohibit behavior that targets individuals with abuse based on protected category.
If you see something on Twitter that you believe violates our hateful conduct policy, please report it to us.
When This Applies
We will review and take action against reports of accounts targeting an individual or group of people with any of the following behavior, whether within Tweets or Direct Messages.
Violent threats
We prohibit content that makes violent threats against an identifiable target. Violent threats are declarative statements of intent to inflict injuries that would result in serious and lasting bodily harm, where an individual could die or be significantly injured, e.g., “I will kill you”.
Note: we have a zero tolerance policy against violent threats. Those deemed to be sharing violent threats will face immediate and permanent suspension of their account.
Wishing, hoping or calling for serious harm on a person or group of people
We prohibit content that wishes, hopes, promotes, or expresses a desire for death, serious and lasting bodily harm, or serious disease against an entire protected category and/or individuals who may be members of that category. This includes, but is not limited to:
- Hoping that someone dies as a result of a serious disease, e.g., “I hope you get cancer and die.”
- Wishing for someone to fall victim to a serious accident, e.g., “I wish that you would get run over by a car next time you run your mouth.”
- Saying that a group of individuals deserve serious physical injury, e.g., “If this group of protesters don’t shut up, they deserve to be shot.”
References to mass murder, violent events, or specific means of violence where protected groups have been the primary targets or victims
We prohibit targeting individuals with content that references forms of violence or violent events where a protected category was the primary target or victims, where the intent is to harass. This includes, but is not limited to sending someone:
- media that depicts victims of the Holocaust;
- media that depicts lynchings.
Inciting fear about a protected category
We prohibit targeting individuals with content intended to incite fear or spread fearful stereotypes about a protected category, including asserting that members of a protected category are more likely to take part in dangerous or illegal activities, e.g., “all [religious group] are terrorists”.
Repeated and/or non-consensual slurs, epithets, racist and sexist tropes, or other content that degrades someone
We prohibit targeting individuals with repeated slurs, tropes or other content that intends to dehumanize, degrade or reinforce negative or harmful stereotypes about a protected category. This includes targeted misgendering or deadnaming of transgender individuals.
We also prohibit the dehumanization of a group of people based on their religion. More information on this policy can be found here.
Hateful imagery
We consider hateful imagery to be logos, symbols, or images whose purpose is to promote hostility and malice against others based on their race, religion, disability, sexual orientation, gender identity or ethnicity/national origin. Some examples of hateful imagery include, but are not limited to:
- symbols historically associated with hate groups, e.g., the Nazi swastika;
- images depicting others as less than human, or altered to include hateful symbols, e.g., altering images of individuals to include animalistic features; or
- images altered to include hateful symbols or references to a mass murder that targeted a protected category, e.g., manipulating images of individuals to include yellow Star of David badges, in reference to the Holocaust.
Media depicting hateful imagery is not permitted within live video, account bio, profile or header images. All other instances must be marked as sensitive media. Additionally, sending an individual unsolicited hateful imagery is a violation of our abusive behavior policy.
Do I need to be the target of this content for it to be a violation of the Twitter Rules?
Some Tweets may appear to be hateful when viewed in isolation, but may not be when viewed in the context of a larger conversation. For example, members of a protected category may refer to each other using terms that are typically considered as slurs. When used consensually, the intent behind these terms is not abusive, but a means to reclaim terms that were historically used to demean individuals.
When we review this type of content, it may not be clear whether the intention is to abuse an individual on the basis of their protected status, or if it is part of a consensual conversation. To help our teams understand the context, we sometimes need to hear directly from the person being targeted to ensure that we have the information needed prior to taking any enforcement action.
Note: individuals do not need to be a member of a specific protected category for us to take action. We will never ask people to prove or disprove membership in any protected category, and we will not investigate this information.
Under this policy, we take action against behavior that targets individuals or an entire protected category with hateful conduct, as described above. Targeting can happen in a number of ways: through mentions, by including a photo of an individual, by referring to someone by their full name, and so on.
When determining the penalty for violating this policy, we consider a number of factors including, but not limited to, the severity of the violation and an individual’s previous record of rule violations. For example, we may ask someone to remove the violating content and serve a period of time in read-only mode before they can Tweet again. Subsequent violations will lead to longer read-only periods and may eventually result in permanent account suspension. If an account is engaging primarily in abusive behavior, or is deemed to have shared a violent threat, we will permanently suspend the account upon initial review.
Learn more about our range of enforcement options.
If someone believes their account was suspended in error, they can submit an appeal.