UK opens office in San Francisco to address AI risks



Ahead of the AI Safety Summit, which kicks off in Seoul, South Korea later this week, co-host the UK is expanding its own efforts in the field. The AI Safety Institute, a UK body established in November 2023 with the ambitious goal of assessing and addressing risks in AI platforms, has announced that it will open a second location in San Francisco.

The idea is to get closer to what is currently the epicenter of AI development, the Bay Area, home to OpenAI, Anthropic, Google, and Meta, among the companies building foundational AI technologies.

Foundation models are the building blocks of generative AI services and other applications, and it is notable that, despite the UK having signed an MOU with the US to collaborate on AI safety efforts, the UK is still choosing to invest in building a direct presence in the United States to tackle the issue.

“By having people on the ground in San Francisco, it will give them access to the headquarters of many of these AI companies,” Michelle Donelan, the UK Secretary of State for Science, Innovation and Technology, said in an interview with TechCrunch. “A number of them have bases here in the United Kingdom, but we think it would be very useful to have a base there as well, with access to an additional pool of talent, and to be able to work even more collaboratively with the United States.”

Part of the reason is that being closer to the epicenter is useful to the UK not only for understanding what is being built, but also because it gives the country more visibility into these companies. That matters given how AI and technology as a whole are viewed by the UK: as a huge opportunity for economic growth and investment.

And given the recent drama at OpenAI around its Superalignment team, it feels like an especially timely moment to establish a presence there.

Launched in November 2023, the AI Safety Institute remains a relatively small operation. The organization has just 32 people on staff today, a veritable David to the Goliath of AI tech, when you consider the billions of dollars invested in the companies building AI models, and thus their own economic motivations for getting their technologies out the door and into the hands of paying users.

One of the AI Safety Institute's most notable developments was the release earlier this month of Inspect, its first set of tools for testing the safety of foundation AI models.

Donelan today called that release a “phase one” effort. Not only has benchmarking models proven difficult to date, but engagement is currently largely an opt-in and inconsistent arrangement. As one senior UK regulatory source pointed out, companies are under no legal obligation to have their models vetted at this point, and not every company is willing to have its models vetted before release. That means that in cases where risk might be identified, the horse may have already bolted.

Donelan said the AI Safety Institute is still working out the best way to engage with AI companies in order to evaluate them. “Our evaluation process is an emerging science in itself,” she said. “So with every evaluation, we will develop the process and finesse it even more.”

Donelan said one of her aims in Seoul would be to present Inspect to the regulators convening at the summit, with the goal of getting them to adopt it as well.

“Now we have an evaluation system. Phase two is also about making AI safe across the whole of society,” she said.

Longer term, Donelan believes the UK will develop more AI legislation, but, echoing what Prime Minister Rishi Sunak has said on the topic, it will resist doing so until the scope of AI risks is better understood.

“We don't believe in legislating before we properly grasp and fully understand,” she said, noting that the institute's recent international AI safety report, which focused primarily on trying to get a comprehensive picture of the research to date, “highlighted that big gaps are missing, and that we need to incentivise and encourage more research globally.”

“And also, legislation takes about a year in the United Kingdom. And if we had just started legislating when we got going instead of [organizing] the AI Safety Summit [held in November last year], we'd still be legislating now, and we wouldn't actually have anything to show for it.”

“Since day one of the Institute, we have been clear on the importance of taking an international approach to AI safety, sharing research, and working collaboratively with other countries to test models and anticipate risks of frontier AI,” said Ian Hogarth, chair of the AI Safety Institute. “Today marks a pivotal moment that allows us to further advance this agenda. We are proud to be scaling our operation in an area bursting with tech talent, adding to the incredible expertise that our staff in London has brought since the very beginning.”
