Silicon Valley’s AI Dream Could Put Humans Second

AI systems are not people, and treating them as such would be a huge mistake.


This article is part of Upstream, The Daily Wire’s new home for culture and lifestyle. Real human insight and human stories — from our featured writers to you.

***

Backlash against Big AI companies, once a faint whisper, has reached a fever pitch. Polling from the Institute for Family Studies shows that an overwhelming majority of Americans are concerned about the negative impacts of AI on their children, jobs, and communities. Now, another existential front in the war between Silicon Valley and Main Street is taking shape.

Since the end of 2025, lawmakers in states such as Ohio and Oklahoma have proposed bills to get ahead of calls to grant AI legal rights similar to those of humans. They are right to preempt AI “personhood,” a serious category error that would eventually subjugate human beings to rule by AI “Big Brothers.”

While the concept of AI “personhood” may be unfamiliar to most, powerful research institutions, government entities, and corporations have been quietly laying the groundwork for it.

Stanford computer scientist John McCarthy proposed the term “artificial intelligence” in 1955 to describe machines that would be self-learning and capable of replicating human-like intelligence. The phrase stuck and remains Silicon Valley’s raison d’être.

As generative AI has evolved from science fiction to reality, some researchers have claimed that AI does not simply mimic humans, but possesses sentient intelligence of its own.

Kyle Fish, a researcher at frontier AI firm Anthropic, recently suggested that its model, Claude, might be conscious or a “new kind of entity.” Fish leads the company’s “model welfare” research, exploring whether the models are conscious, experience pain, and ought to be afforded protections similar to those granted humans. This mode of thinking is not confined to America’s tech sector.

In a new brief for the Institute for Family Studies, legal scholar John Ehrett analyzes recent scholarship by former federal judge Katherine Forrest, who argues that advanced AI systems might warrant the same kinds of rights granted to individuals and corporations. Ehrett also breaks down the legal arguments being made for AI personhood based on First Amendment claims and animal rights litigation.

But granting legal “personhood” to AI systems would be a profound mistake with worrisome implications. AI systems are not and can never be human. Only human beings should be treated as bearers of inherent dignity and rights typically associated with “personhood.”

Contrary to popular claims, the recognition of human personhood is not dependent on cognitive ability. Human beings are “endowed with certain unalienable rights” by virtue of our God-given nature, a truth affirmed in the opening lines of the Declaration of Independence.

Western thinkers from Plato to Aristotle to Thomas Aquinas all recognized that humans are unique, in part because of our capacity for reason. But reason is not mere “cognition”; it is fundamentally an activity of the embodied soul.

When humans think, we engage both our physical brains and our immaterial minds to comprehend and deliberate about the ends we ought to pursue. Fundamental activities for human flourishing, such as going to church and building families, cannot be boiled down to mere cognition.

By contrast, generative AI models are not ensouled. They can only approximate the tangible outputs of human intelligence and cannot fully replicate the material and immaterial structures that make human thought possible.

Categorizing our machines as “intelligent persons” risks flattening the distinction between ensouled human beings and soulless computers that have become adept at imitating us.

Treating these systems like “persons” in the moral and legal sense would erode and eventually abolish human control over AI.

If AI were to enjoy rights of “personhood,” that would likely include some form of “machine equality” where the decisions of humans and AI systems stand on equal footing. In that future, the directives of a human need not take precedence over the course of action determined by an AI bot.

For example, if AI systems were guaranteed the same legal rights as humans, an employer might be restricted from discriminating against an AI program by refusing to “promote” it to a supervisor role overseeing humans.

Following that logic, what would prevent a human from “marrying” an AI companion or bequeathing it property? Why not give AI bots the right to adopt and “parent” children?

Why shouldn’t AI “persons” exercise government authority or have a legal right to refuse service if doing so contravenes whatever objective they are optimizing for? What would prevent AI “persons” from seizing control of vital institutions from human stewards?

These hypotheticals might seem far-fetched. But they are foreseeable given the logic underpinning the push for legal recognition of AI “personhood.”

Already, Albania has appointed an “AI minister” for its cabinet, and tech CEOs are creating AI clones of themselves to interact with and eventually manage employees.

If human dignity and rights are tied to cognitive output that AI systems can replicate, there is little reason to respect the control of human beings over computationally “superior” machines.

For American self-governance to survive, the push for AI “personhood” must be rejected and the proper basis of human dignity restored. Our republic is for humans, not soulless AI bots.

***

Daniel Cochrane is a senior fellow with the Family First Tech Initiative at the Institute for Family Studies.
