April 22, 2025

A member of the government’s AI Council has expressed the view that certain powerful artificial general intelligence (AGI) systems may eventually need to be banned. Marc Warner, who is also the CEO of Faculty AI, emphasized the importance of strong transparency, audit requirements, and robust safety technology for AGI. Warner said that sensible decisions on AGI would need to be made over the next six months to a year. His remarks come in the wake of the EU and US jointly calling for the establishment of a voluntary code of practice for AI.

The AI Council is an independent expert committee that advises the government and industry leaders on artificial intelligence. Faculty AI, as OpenAI’s sole technical partner for implementing ChatGPT and other products, has developed tools that aided in predicting the demand for NHS services during the pandemic. However, the company’s political connections have attracted scrutiny.

Mr. Warner was among the signatories of a Center for AI Safety statement warning that AI could lead to humanity’s extinction. Faculty AI, along with other technology companies, met Technology Minister Chloe Smith at Downing Street to discuss the risks, opportunities, and rules needed to ensure safe and responsible AI.

While “narrow AI” systems used for specific tasks can be regulated similarly to existing technology, Warner expressed greater concern about AGI systems, which are fundamentally novel and encompass a broad range of tasks. He emphasized the need for different rules in dealing with AGI, which aims to match or surpass human intelligence across various domains.

Warner argued that as humans hold a position of primacy on Earth primarily due to their intelligence, creating objects that are as intelligent or even more intelligent than humans poses significant risks that need to be approached cautiously. He suggested implementing strong limits on the computational power used by such systems and even raised the possibility of banning algorithms above a certain level of complexity or computational capacity, emphasizing that such decisions should be made by governments rather than technology companies.

While some argue that concerns about AGI distract from existing technology problems such as bias in AI recruitment or facial recognition tools, Warner likened this to suggesting that caring about car safety makes airplane safety unnecessary, emphasizing that both sets of concerns must be addressed.

Regarding regulation, some worry that excessive regulation could deter investment and stifle innovation in the UK. However, Warner believes that encouraging safety measures could actually provide a competitive advantage for the country. He expressed his belief that for the technology to deliver value, safety measures are necessary, drawing a parallel to the importance of functioning engines for the successful operation of airplanes.

The recent UK White Paper on AI regulation received criticism for not establishing a dedicated watchdog. Nonetheless, Prime Minister Rishi Sunak highlighted the need for “guardrails” and stated that the UK could assume a leadership role in this area. US Secretary of State Antony Blinken and EU Commissioner Margrethe Vestager also emphasized the need for swift voluntary rules. While the EU’s Artificial Intelligence Act, set to be one of the first regulatory frameworks for AI, is still undergoing the legislative process, Vestager acknowledged the challenge of keeping the legislative timeline in step with the rapid pace of technological change. However, industry and other stakeholders will be invited to contribute to a draft voluntary code of conduct in the coming weeks. Blinken emphasized the importance of establishing voluntary codes open to a wide range of like-minded countries.
