U.S., U.K. Announce Partnership to Safety Test AI Models
The U.K. and U.S. governments announced Monday that they will work together on safety testing the most powerful artificial intelligence models. An agreement, signed by Michelle Donelan, the U.K. Secretary of State for Science, Innovation and Technology, and U.S. Secretary of Commerce Gina Raimondo, sets out a plan for collaboration between the two governments.
“I think of [the agreement] as marking the next chapter in our journey on AI safety, working hand in glove with the United States government,” Donelan told TIME in an interview at the British Embassy in Washington, D.C. on Monday. “I see the role of the United States and the U.K. as being the real driving force in what will become a network of institutes eventually.”
The U.K. and U.S. AI Safety Institutes were established just one day apart, around the inaugural AI Safety Summit hosted by the U.K. government at Bletchley Park in November 2023. While the two organizations’ cooperation was announced at the time of their creation, Donelan says that the new agreement “formalizes” and “puts meat on the bones” of that cooperation. She also said it “offers the opportunity for them—the United States government—to lean on us a little bit in the stage where they're establishing and formalizing their institute, because ours is up and running and fully functioning.”
The two AI safety testing bodies will develop a common approach to AI safety testing that involves using the same methods and underlying infrastructure, according to a news release. The bodies will look to exchange employees and share information with each other “in accordance with national laws and regulations, and contracts.” The release also stated that the institutes intend to perform a joint testing exercise on an AI model available to the public.
“The U.K. and the United States have always been clear that ensuring the safe development of AI is a shared global issue,” said Secretary Raimondo in a press release accompanying the partnership’s announcement. “Reflecting the importance of ongoing international collaboration, today’s announcement will also see both countries sharing vital information about the capabilities and risks associated with AI models and systems, as well as fundamental technical research on AI safety and security.”
Safety tests such as those being developed by the U.K. and U.S. AI Safety Institutes are set to play an important role in efforts by lawmakers and tech company executives to mitigate the risks posed by rapidly progressing AI systems. OpenAI and Anthropic, the companies behind the chatbots ChatGPT and Claude, respectively, have published detailed plans for how they expect safety tests to inform their future product development. The recently passed E.U. AI Act and U.S. President Joe Biden’s executive order on AI both require companies developing powerful AI models to disclose the results of safety tests.
Read More: Nobody Knows How to Safety-Test AI
The U.K. government under Prime Minister Rishi Sunak has played a leading role in marshaling an international response to the most powerful AI models—often referred to as “frontier AI”—convening the first AI Safety Summit and committing £100 million ($125 million) to the U.K. AI Safety Institute. The U.S., however, despite its economic might and the fact that almost all leading AI companies are based on its soil, has so far committed only $10 million to the U.S. AI Safety Institute. (The National Institute of Standards and Technology, the government agency that houses the U.S. AI Safety Institute, suffers from chronic underinvestment.) Donelan rejected the suggestion that the U.S. is failing to pull its weight, arguing that the $10 million figure is not a fair representation of the resources being dedicated to AI across the U.S. government.
“They are investing time and energy in this agenda,” said Donelan, fresh off a meeting with Raimondo, who Donelan says “fully appreciates the need for us to work together on gripping the risks to seize the opportunities.” Donelan argues that in addition to the $10 million in funding for the U.S. AI Safety Institute, the U.S. government “is also tapping into the wealth of expertise across government that already exists.”
Despite its leadership on some aspects of AI, the U.K. government has decided not to pass legislation that would mitigate the risks from frontier AI. Donelan’s opposite number, the U.K. Labour Party’s Shadow Secretary of State for Science, Innovation and Technology Peter Kyle, has said repeatedly that a Labour government would pass laws mandating that tech companies share the results of AI safety tests with the government, rather than relying on voluntary agreements. Donelan, however, says the U.K. will refrain from regulating AI in the short term to avoid curbing industry growth or passing laws that are made obsolete by technological progress.
“We don't think it would be right to rush to legislate. We've been very outspoken on that,” Donelan told TIME. “That is the area where we do diverge from the E.U. We want to be fostering innovation, we want to be getting this sector to grow in the U.K.”
The memorandum commits the two countries to developing similar partnerships with other countries. Donelan says that “a number of nations are either in the process of or thinking about setting up their own institutes,” although she did not specify which. (Japan announced the establishment of its own AI Safety Institute in February.)
“AI does not respect geographical boundaries,” said Donelan. “We are going to have to work internationally on this agenda, and collaborate and share information and share expertise if we are going to really make sure that this is a force for good for mankind.”