
7 AI Firms Respond to Risk Warnings With Safety Promises

US President Joe Biden announced on Friday that seven leading AI companies have made voluntary commitments to implement measures aimed at making AI technology safer.

The companies – Alphabet, Meta, Amazon, Anthropic, Inflection, OpenAI and Microsoft – agreed to take steps such as thoroughly testing AI systems before release and developing ways to make clear when content has been generated by AI.

At a White House event with executives from the firms, Biden said the commitments were “a promising step”, but more work was needed to address threats posed by emerging technologies.

“We must be clear-eyed and vigilant about the threats from emerging technologies to our democracy and our values,” he said.

The pace of development in AI tools like chatbots has prompted fears over misinformation and the destabilisation of society. The capabilities demonstrated by chatbot ChatGPT since its launch last year have been described as marking a new era for AI.

Testing and transparency

The companies pledged to conduct thorough security testing of AI systems before release and to share information to help reduce risks. They also committed to investing in cybersecurity as AI capabilities advance.

In a key measure aimed at tackling misinformation, the firms said they would work on developing ways to watermark all AI-generated content.

“This watermark, embedded in the content in a technical manner, presumably will make it easier for users to spot deep-fake images or audios that may, for example, show violence that has not occurred, create a better scam or distort a photo of a politician to put the person in an unflattering light,” said an industry analyst.

However, precisely how the watermark will be made evident as AI content spreads online is still being determined.
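The embed-and-verify idea behind such a watermark can be sketched in code. The example below is a hypothetical illustration only, not any company's actual scheme: it attaches a keyed signature (HMAC) to generated text so a verifier holding the key can check whether content carries a valid, untampered provenance tag. Real watermarking proposals work at the pixel or token level and are designed to survive editing; the key name and tag format here are invented for the demo.

```python
import hmac
import hashlib

# Hypothetical shared key between the content generator and the verifier.
SECRET_KEY = b"demo-provenance-key"

def tag_content(text: str) -> str:
    """Append a provenance tag (HMAC of the text) to AI-generated content."""
    sig = hmac.new(SECRET_KEY, text.encode("utf-8"), hashlib.sha256).hexdigest()
    return f"{text}\n[AI-GENERATED:{sig}]"

def verify_content(tagged: str) -> bool:
    """Check that the tag matches the content it is attached to."""
    body, _, tag = tagged.rpartition("\n[AI-GENERATED:")
    if not tag.endswith("]"):
        return False
    expected = hmac.new(SECRET_KEY, body.encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag[:-1], expected)

out = tag_content("A generated paragraph.")
print(verify_content(out))   # True: tag matches the content
print(verify_content(out.replace("generated", "edited")))  # False: content was altered
```

The limitation the article alludes to is visible even in this toy version: the tag only survives as long as the content travels with it intact, which is why making watermarks evident as content spreads online remains an open problem.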

The companies plan to publicly report on AI capabilities and limitations regularly. The goal is for people to easily know when AI systems have created online material.

Regulation and responsible development

The voluntary commitments are seen as a success for the Biden administration’s moves to regulate AI technology as investment and use continues to grow rapidly.

The US currently lags behind Europe on AI regulation. In June, EU politicians agreed to draft rules under which systems like ChatGPT would have to meet obligations such as disclosing when content is AI-generated.

AI developers have warned of the need to ensure the technology is free from bias and not used in discriminatory ways against vulnerable groups.

Biden said his administration is also working on an executive order and bipartisan legislation focused on responsible development of AI. The White House said it would collaborate with allies to establish an international governance framework.

Beyond content creation, the companies highlighted intentions to direct AI capabilities at challenges such as medical research and climate change mitigation.

“The hype has somewhat run ahead of the technology,” said Meta’s Nick Clegg, striking a note of caution amid the rapid pace of advancement.

While describing the voluntary commitments as promising, Biden emphasised AI would bring profound changes requiring continued collective vigilance.

“We’ll see more technology change in the next 10 years, or even in the next few years than we’ve seen in the last 50 years,” he remarked.

Rebecca Taylor

Rebecca is our AI news writer. A graduate of Leeds University with an MA in International Journalism, she has a keen eye for the latest AI developments. Her passion for AI, combined with her journalistic expertise, brings insightful news stories to our readers.
