UK AI alignment project gets OpenAI and Microsoft boost

OpenAI and Microsoft are the latest companies to back the UK’s AI Security Institute (AISI).
The two firms have pledged support for the Alignment Project, an international effort to ensure advanced AI systems are safe, secure and act as intended.
During the AI Impact Summit in India, the UK government announced that £27m is now available for AI alignment research, backing some 60 projects.
The project combines grant funding for research, access to compute infrastructure and ongoing mentorship from AISI’s own leading scientists to drive progress in alignment research.
Without continued progress in this area, increasingly powerful AI models could act in ways that are difficult to anticipate or control, which could pose challenges for global safety and governance.
“AI offers us huge opportunities, but we will always be clear-eyed on the need to ensure safety is baked into it from the outset,” said UK deputy prime minister David Lammy.
“We’ve built strong safety foundations which have put us in a position where we can start to realise the benefits of this technology,” he added. “The support of OpenAI and Microsoft will be invaluable in continuing to progress this effort.”
The government defines AI alignment as the effort to steer advanced AI systems so that they act as intended, without unintended or harmful behaviours. It involves developing methods that prevent unsafe behaviour as AI systems become more capable.
The Department for Science, Innovation and Technology (DSIT) sees progress on alignment as something that will boost confidence and trust in AI, ultimately supporting the adoption of systems that increase productivity.
UK AI minister Kanishka Narayan said: “We can only unlock the full power of AI if people trust it – that’s the mission driving all of us. Trust is one of the biggest barriers to AI adoption, and alignment research tackles this head-on.”
Urgent challenge
With the rise of AI systems that can perform increasingly complex tasks, there is a growing global consensus that AI alignment is one of the most urgent technical challenges of our era.
Mia Glaese, vice-president of research at OpenAI, said: “As AI systems become more capable and more autonomous, alignment has to keep pace. The hardest problems won’t be solved by any one organisation working in isolation – we need independent teams testing different assumptions and approaches.
“Our support for the UK AI Security Institute’s Alignment Project complements our internal alignment work, and helps strengthen a broader research ecosystem focused on keeping advanced systems reliable and controllable as they’re deployed in more open-ended settings,” she added.
Besides OpenAI and Microsoft, AISI’s Alignment Project is supported by an international coalition including the Canadian Institute for Advanced Research, the Australian Department of Industry, Science and Resources’ AI Safety Institute, Schmidt Sciences, Amazon Web Services, Anthropic, the AI Safety Tactical Opportunities Fund, Halcyon Futures, the Safe AI Fund, Sympatico Ventures, Renaissance Philanthropy, UK Research and Innovation, and the Advanced Research and Invention Agency. It is led by an expert advisory board, including Yoshua Bengio, Zico Kolter, Shafi Goldwasser and Andrea Lincoln.
DSIT said the Alignment Project builds on AISI’s international leadership, ensuring leading researchers from the UK and collaborating partners can shape the direction of the field and drive progress on safe AI that behaves predictably.