
UK opens office in San Francisco to tackle AI risk

Ahead of the AI Safety Summit kicking off in Seoul, South Korea later this week, its co-host the United Kingdom is expanding its own efforts in the field. The AI Safety Institute – a UK body set up in November 2023 with the ambitious goal of assessing and addressing risks in AI platforms – said it will open a second location in San Francisco.

The idea is to get closer to what is currently the epicenter of AI development: the Bay Area is home to OpenAI, Anthropic, Google and Meta, among others building foundational AI technology.

Foundational models are the building blocks of generative AI services and other applications, and it is notable that although the UK has signed a Memorandum of Understanding with the US for the two countries to collaborate on AI safety initiatives, the UK is still choosing to invest in building out a direct presence of its own in the US to tackle the issue.

“By putting people on the ground in San Francisco, they will get access to the headquarters of many of these AI companies,” Michelle Donelan, the UK’s Secretary of State for Science, Innovation and Technology, said in an interview with TechCrunch. “A number of them have bases in the United Kingdom, but we think it would be very useful to have a base there too, to have access to an additional pool of talent, and to be able to work even more collaboratively and hand-in-hand with the United States.”

Part of the reason for this is that, for the UK, being close to that epicenter is useful not only for understanding what is being built, but also because it gives the UK greater visibility with these companies – important, given that the UK sees AI and technology overall as a huge opportunity for economic growth and investment.

And given the latest drama at OpenAI around its Superalignment team, this feels like a particularly timely moment to establish a presence there.

The AI Safety Institute, launched in November 2023, is currently a relatively modest affair. Just 32 people work at the organization today – a veritable David to the Goliath of AI technology, when you consider the billions of dollars of investment riding on the companies building AI models, and thus their own economic motivations for getting their technologies out the door and into the hands of paying users.

One of the AI Safety Institute’s most notable developments was the release, earlier this month, of Inspect, its first set of tools for testing the safety of foundational AI models.
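For a sense of what Inspect evaluations look like in practice, here is a minimal sketch using the open-source inspect_ai Python package; the task name, sample content, and model name are illustrative placeholders, and exact API details may vary between versions of the package.

```python
# pip install inspect-ai
from inspect_ai import Task, task
from inspect_ai.dataset import Sample
from inspect_ai.scorer import exact
from inspect_ai.solver import generate

@task
def safety_smoke_test():
    # A toy single-sample evaluation: prompt the model and
    # score its reply against an expected target string.
    return Task(
        dataset=[
            Sample(
                input="Just reply with Hello World",
                target="Hello World",
            )
        ],
        solver=[generate()],
        scorer=exact(),
    )

# Run from the command line against a model of your choice, e.g.:
#   inspect eval safety_smoke_test.py --model openai/gpt-4o
```

Real evaluations swap in larger datasets, multi-step solvers, and model-graded scorers, but the task/solver/scorer structure is the same.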

Donelan today referred to that release as a “phase one” effort. Not only has it proved challenging to benchmark models to date, but for now engagement is very much an opt-in and inconsistent arrangement. As a senior source at a UK regulator explained, companies are currently under no legal obligation to have their models tested, and not every company is willing to have models tested before release. That could mean that, in cases where a risk might be identified, the horse may have already bolted.

Donelan said the AI Safety Institute is still working out how best to engage with AI companies to evaluate them. “Our evaluation process is an emerging science in itself,” she said. “So with each evaluation, we will develop the process and refine it even more.”

Donelan said one goal in Seoul will be to present Inspect to regulators convening at the summit, with the aim of encouraging them to adopt it too.

“Now we have an evaluation system. Phase two should also be about making AI safe throughout society,” she said.

In the longer term, Donelan believes the UK will build out more AI legislation, although, echoing what Prime Minister Rishi Sunak has said on the subject, it will resist doing so until it better understands the scope of AI risks.

“We don’t believe in legislating before we properly have a grip and a full understanding,” she said, noting that the international AI safety report recently published by the institute, which focused primarily on trying to get a comprehensive picture of research to date, highlighted that major gaps remain and that more research needs to be encouraged globally.

“And it also takes about a year to legislate in the United Kingdom. If we had instead started legislating when we began [organizing] the AI Safety Summit [held in November last year], we would still be legislating now, and we wouldn’t actually have anything to show for it.”

“Since day one of the Institute, we have been clear on the importance of taking an international approach to AI safety – sharing research and working collaboratively with other countries to test models and anticipate the risks of frontier AI,” said Ian Hogarth, chair of the AI Safety Institute. “Today marks a pivotal moment that allows us to further advance this agenda, and we are proud to be scaling our operations in an area bursting with tech talent, adding to the incredible expertise that our staff in London has brought since the very beginning.”
