News

A Surprising Mix Of Public Figures Call For Halt To ‘Superintelligence’ AI That Could ‘Outperform All Humans’

Concerns include "losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction."

   DailyWire.com

Hundreds of public figures — including tech leaders, celebrities, media personalities, and politicians — signed a statement released on Wednesday calling for an immediate pause on the development of advanced artificial intelligence, referred to as “superintelligence.”

Superintelligence technology is currently being pursued by tech giants such as Mark Zuckerberg’s Meta, Sam Altman’s OpenAI, and Elon Musk’s xAI. These companies hope to build “superintelligence in the coming decade that can significantly outperform all humans on essentially all cognitive tasks,” according to a preamble to the statement. The preamble points to concerns “ranging from human economic obsolescence and disempowerment, losses of freedom, civil liberties, dignity, and control, to national security risks and even potential human extinction.”

The “Statement on Superintelligence” itself, which now has more than 1,000 signatures, consists of just 30 words: “We call for a prohibition on the development of superintelligence, not lifted before there is broad scientific consensus that it will be done safely and controllably, and strong public buy-in.”

High-profile signers include leaders in the tech industry, such as Apple co-founder Steve Wozniak and AI pioneers Yoshua Bengio and Geoffrey Hinton.

In a statement accompanying his signature, Bengio wrote that superintelligence “could surpass most individuals across most cognitive tasks within just a few years.”

“These advances could unlock solutions to major global challenges, but they also carry significant risks,” he added. “To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use. We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”


The “Statement on Superintelligence” was promoted by the Future of Life Institute, a group that aims “to steer transformative technologies away from extreme, large-scale risks and towards benefiting life.” Future of Life Institute President Max Tegmark said he reached out to all the CEOs of major AI developers, asking them to sign the document. However, he added that he did not expect them to endorse the statement, the Associated Press reported.

“I really empathize for them, frankly, because they’re so stuck in this race to the bottom that they just feel an irresistible pressure to keep going and not get overtaken by the other guy,” Tegmark said. “I think that’s why it’s so important to stigmatize the race to superintelligence, to the point where the U.S. government just steps in.”

Former U.S. National Security Adviser Susan Rice and former Chairman of the Joint Chiefs of Staff Adm. Mike Mullen also signed the document, as did multiple former Democratic and Republican members of Congress. Prince Harry, Duke of Sussex, and his wife, Meghan, Duchess of Sussex, were among other high-profile signers.

The statement was also signed by conservative media personalities Glenn Beck and Steve Bannon. Numerous faith leaders endorsed the statement, such as Paolo Benanti, a Papal AI advisor and Catholic priest; Johnnie Moore, the president of the Congress of Christian Leaders and a White House evangelical adviser; and Andrew T. Walker, the Associate Professor of Christian Ethics and Public Theology at The Southern Baptist Theological Seminary.

Both Altman and Musk have warned about the potential major consequences of developing advanced AI. In a blog post 10 years ago, Altman wrote, “Development of superhuman machine intelligence (SMI) is probably the greatest threat to the continued existence of humanity.” Musk has similarly discussed the risks of advanced AI, saying earlier this year that he believes there’s “only a 20% chance of annihilation,” CNBC reported.
