Sunday, April 20, 2025

UK AI Safety Institute ventures across the pond with new US location



The UK’s AI Safety Institute is set to expand internationally with a brand-new location in the US.

On May 20, Michelle Donelan, the U.K. Technology Secretary, announced that the institute will open its first overseas office in San Francisco this summer.

The announcement said the strategic choice of a San Francisco office would allow the U.K. to “tap into the wealth of tech talent available in the Bay Area,” along with engaging with one of the world’s largest artificial intelligence (AI) labs, located between London and San Francisco.

Additionally, it said the move will help it “cement” relationships with key players in the U.S. to push for global AI safety “for the public interest.”

Already, the London branch of the AI Safety Institute has a team of 30 that is on a trajectory to scale and acquire more expertise, particularly in risk assessment for frontier AI models.

Donelan said the expansion represents the U.K.’s leadership and vision for AI safety in action:

“It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety.”

This follows the U.K.’s landmark AI Safety Summit, which took place in London in November 2023. The summit was the first of its kind to focus on AI safety on a global scale.

Related: Microsoft faces multibillion-dollar fine in EU over Bing AI

The event drew leaders from around the world, including from the U.S. and China, along with leading voices in the AI field such as Microsoft President Brad Smith, OpenAI CEO Sam Altman, Google DeepMind CEO Demis Hassabis and Elon Musk.

In this latest announcement, the U.K. also said it is releasing some of the institute’s recent results from safety testing it performed on five publicly available advanced AI models.

It anonymized the models and said the results provide a “snapshot” of the models’ capabilities rather than designating them as “safe” or “unsafe.”

Among the findings: several models could complete cybersecurity challenges, though others struggled with more advanced ones, and several models were found to have PhD-level knowledge of chemistry and biology.

It concluded that all tested models were “highly vulnerable” to basic jailbreaks and that none of them could complete more “complex, time-consuming tasks” without human supervision.

Ian Hogarth, the chair of the institute, said these evaluations would help contribute to an empirical assessment of model capabilities:

“AI safety is still a very young and emerging field. These results represent only a small portion of the evaluation approach AISI is developing.”

Magazine: ‘Sic AIs on each other’ to prevent AI apocalypse: David Brin, sci-fi author