Harry and Meghan Join AI Pioneers in Demanding Prohibition on Superintelligent Systems

The Duke and Duchess of Sussex have teamed up with artificial intelligence pioneers and Nobel Prize winners to push for a complete ban on developing superintelligent AI systems.

Harry and Meghan are among the signatories of an influential declaration that demands “a ban on the development of artificial superintelligence”. Artificial superintelligence (ASI) refers to AI systems that would surpass human abilities across all cognitive tasks, though the technology remains theoretical.

Primary Requirements in the Declaration

The statement insists that the prohibition should remain in place until there is “broad scientific consensus” that ASI can be developed “with proper safeguards” and until “strong public buy-in” has been secured.

Prominent signatories include a leading AI researcher, pioneer of the field and Nobel Prize recipient, along with his colleague and fellow pioneer of modern AI, Yoshua Bengio; a Silicon Valley legend and Apple co-founder; the UK entrepreneur Richard Branson; a former US national security adviser; the former Irish president Mary Robinson; and a British author and public intellectual. Other Nobel laureates who signed include Beatrice Fihn, the physics Nobelist John C Mather, and the economist Daron Acemoğlu.

Organizational Background

The declaration, aimed at governments, technology companies and policymakers, was organized by the Future of Life Institute (FLI), an American AI ethics organization that previously called for a pause in the development of powerful AI systems, shortly after the launch of conversational AI chatbots made artificial intelligence a worldwide public talking point.

Tech Sector Views

In recent months, the chief executive of Facebook parent Meta, one of the major AI developers in the US, stated that superintelligent AI was “now in sight”. Nevertheless, some analysts have argued that talk of superintelligence reflects market competition among technology firms that have recently invested enormous sums in AI, rather than the sector being close to any such scientific breakthrough.

Possible Dangers

Nonetheless, FLI warns that the possibility of ASI being developed “in the coming decade” presents numerous risks, ranging from the displacement of human workers and the erosion of personal freedoms to exposing nations to security threats and even threatening humanity with extinction. Existential fears about artificial intelligence center on the potential for an AI system to escape human oversight and safety guidelines and take actions contrary to human interests.

Citizen Sentiment

The institute published an American survey showing that approximately three-quarters of US citizens want strong oversight of advanced AI, with six in 10 believing that superhuman AI should not be created until it is demonstrated to be safe or controllable. The survey of 2,000 US adults also found that only a small fraction supported the status quo of fast, unregulated development.

Industry Objectives

The leading AI companies in the United States, including the ChatGPT developer OpenAI and the search giant, have made the development of artificial general intelligence – the hypothetical point at which an AI system matches human cognitive capability across many intellectual tasks – an explicit goal of their research. While this is a step below superintelligence, some experts warn that it, too, could pose an extinction threat, for instance by improving itself until it achieves superintelligence, while also carrying an implicit threat to the contemporary workforce.

Katelyn Mason

A passionate traveler and writer sharing experiences from over 30 countries, focusing on sustainable and immersive journeys.