A coalition of US right-wing media personalities, scientists, and tech leaders is calling for a global ban on the development of superintelligent artificial intelligence (AI) until science ensures it can be controlled safely and the public supports moving forward.
According to a Wednesday report by Reuters, the plea, coordinated by the Future of Life Institute (FLI), was announced through a joint statement signed by more than 850 public figures.
The document asks governments and AI companies to suspend all work on superintelligence, meaning AI systems that would surpass human cognitive abilities, until publicly approved safety mechanisms are in place.
The signatories in the coalition are led by right-wing media members Steve Bannon and Glenn Beck, alongside leading AI researchers Geoffrey Hinton and Yoshua Bengio. Other figures include Virgin Group founder Richard Branson, Apple cofounder Steve Wozniak, and former US military and political officials.
The list also features former Chairman of the Joint Chiefs of Staff Mike Mullen, former National Security Advisor Susan Rice, and the Duke and Duchess of Sussex, Prince Harry and Meghan Markle, as well as former President of Ireland Mary Robinson.
Bengio, a renowned computer scientist, spoke about the coalition’s fears in a statement on the initiative’s website, saying AI systems may soon outperform most humans in cognitive tasks. He reiterated that the technology could help solve global problems, but warned that it poses immense dangers if developed recklessly.
“To safely advance toward superintelligence, we must scientifically determine how to design AI systems that are fundamentally incapable of harming people, whether through misalignment or malicious use,” he said. “We also need to make sure the public has a much stronger say in decisions that will shape our collective future.”
The Future of Life Institute, a nonprofit founded in 2014 with early backing from Tesla CEO Elon Musk and tech investor Jaan Tallinn, is also among groups campaigning for responsible AI governance.
The organization warns that the race to build superintelligent AI or artificial superintelligence (ASI) could create irreversible risks for humanity if not properly regulated.
In its latest statement, the group noted superintelligence could lead to “human economic obsolescence, disempowerment, losses of freedom, civil liberties, dignity, and control, and national security threats and even the potential extinction of humanity.”
FLI is asking policymakers to ban superintelligence research and development fully until there is “strong public support” and “scientific consensus that such systems can be safely built and controlled.”
Tech giants continue to push the boundaries of AI capabilities, even as some groups object to the technology’s impact on jobs and the pace of product development. Elon Musk’s xAI, Sam Altman’s OpenAI, and Meta are all racing to develop powerful large language models (LLMs).
In July, Meta CEO Mark Zuckerberg said during a conference that the development of superintelligent systems was “now in sight.” However, some AI experts claim the Meta CEO is using marketing tactics, signaling to competitors that his company is “ahead” in a sector expected to attract hundreds of billions of dollars in the coming years.
The US government and technology industry have resisted demands for moratoriums, arguing that fears of an “AI apocalypse” are vastly exaggerated. Opponents of a development pause say it would stifle innovation, slow economic growth, and forfeit the potential benefits AI could bring to medicine, climate science, and automation.
Yet, according to a national poll commissioned by FLI, the American public is largely in favor of stricter oversight. The survey of 2,000 adults found that three-quarters of respondents support more regulation of advanced AI, and six in ten believe that superhuman AI should not be developed until it is proven controllable.
Before becoming OpenAI’s chief executive, Sam Altman warned in a 2015 blog post that “superhuman machine intelligence is probably the greatest threat to the continued existence of humanity.”
Similarly, Elon Musk, who has simultaneously funded and fought against AI advancement, said earlier this year on Joe Rogan’s podcast that there was “only a 20% chance of annihilation” from AI surpassing human intelligence.