Tech Secretary unveils £8.5m research funding aimed at breaking new ground in AI safety testing

  • New grants offered to researchers to push the boundaries of AI safety research
  • Funding program launched as the UK Government seeks new methods to support the safe and reliable deployment of AI
  • The grants aim to understand how society and the economy can adapt to the transformations brought about by AI

At the AI Seoul Summit (22 May), co-hosted by the UK and the Republic of Korea, Technology Secretary Michelle Donelan announced that the UK Government will offer grants to researchers to study how to protect society from AI risks such as deepfakes and cyber attacks, and how to realize AI’s benefits, such as increased productivity.

The most promising proposals will be developed into longer-term projects and may receive further funding.

The program (published on www.aisi.gov.uk) will be led within the UK Government’s pioneering AI Safety Institute by Shahar Avin, an AI safety researcher who will be seconded to the Institute, and Christopher Summerfield, the UK AI Safety Institute’s Research Director. The research program will be carried out in collaboration with UK Research and Innovation and the Alan Turing Institute, and the UK AI Safety Institute will strive to collaborate with other AI safety institutes internationally. Applicants must be based in the UK, but are encouraged to collaborate with other researchers from around the world.

The UK Government’s pioneering AI Safety Institute is a world leader in the testing and evaluation of AI models, advancing the cause of safe and reliable AI. Earlier this week the AI Safety Institute released its first set of public test results on AI models. It also announced a new office in the US and a partnership with the Canadian AI Safety Institute – building on a landmark agreement with the US earlier this year.

The new grant program is intended to expand the Institute’s mission to include the emerging field of ‘systemic AI safety’, which aims to understand the consequences of AI at a societal level and study how our institutions, systems and infrastructure can adapt to the transformations this technology has brought about.

Examples of proposals within scope include ideas on how to counter the spread of fake images and disinformation by intervening on the platforms that spread them, rather than on the AI models that generate them.

Technology Secretary Michelle Donelan said:

When the UK launched the world’s first AI Safety Institute last year, we committed to an ambitious but urgent mission: to reap the positive benefits of AI by furthering the cause of AI safety.

With evaluation systems for AI models now in place, phase 2 of my plan to harness the opportunities of AI must be about making AI safe across the whole of society.

This is exactly what we are making possible with this funding, which will allow our Institute to work with academia and industry to ensure we remain proactive in developing new approaches that can help keep AI a transformative force for good.

I am acutely aware that we can only meet this enormous challenge by tapping into a broad and diverse pool of talent and disciplines, and by advancing new approaches that push the boundaries of existing knowledge and methodologies.

The Honorable François-Philippe Champagne, Minister of Innovation, Science and Industry, said:

Canada continues to play a leading role in global governance and responsible use of AI.

From advocating for the establishment of the Global Partnership on AI (GPAI), to pioneering a national AI strategy, to being among the first to propose a legislative framework for regulating AI, we will continue to work with the global community to shape the international discourse and build trust around this transformational technology.

The AISI Systemic Safety program aims to attract proposals from a wide range of researchers in both the public and private sectors, who will work closely with the UK Government to ensure their ideas have maximum impact.

It runs in parallel with the Institute’s evaluation and testing of AI models; the Institute will continue to work with AI labs to set standards and help guide AI development towards a positive impact.

Christopher Summerfield, UK AI Safety Institute Research Director, said:

This new grant program is an important step towards ensuring that AI is used safely in society.

We need to think carefully about how to adapt our infrastructure and systems to a new world in which AI is embedded in everything we do. This program is designed to generate a wealth of ideas for tackling this problem, and to ensure that great ideas can be put into practice.

The AI Seoul Summit builds on the inaugural AI Safety Summit, hosted by the UK at Bletchley Park last November, which was one of the largest ever gatherings of countries, businesses and civil society on AI.

UKRI Chief Executive Professor Dame Ottoline Leyser said:

The AI Safety Institute’s work is essential to understanding AI risks and creating solutions to maximize the social and economic value of AI for all citizens. UKRI is pleased to be working closely with the Institute on this new program to ensure that UK institutions, systems and infrastructure can safely benefit from AI.

This program taps into the UK’s leading AI expertise, and UKRI’s AI investment portfolio spanning skills, research, infrastructure and innovation, to ensure effective governance of AI deployment in society and the economy.

The program will bring safety research to the heart of government and support the innovation-boosting regulation that will shape Britain’s digital future.

Professor Helen Margetts, Director of Public Policy at the Alan Turing Institute, said:

We are delighted to be part of this important initiative, which we hope will have a significant impact on Britain’s ability to tackle the threats of AI technology and keep people safe. Rapidly advancing technology is bringing profound changes to the information environment and shaping our social, economic and democratic interactions.

Funding AI safety research is therefore critical – to ensure we are all protected from the potential risks of misuse while maximizing the benefits of AI for a positive impact on society.

Notes for editors

AI researcher Shahar Avin will lead the grant program from the UK AI Safety Institute, bringing a wealth of knowledge and experience to ensure proposals reach their full potential in protecting the public from AI risks while reaping AI’s benefits. He is a senior researcher at the Centre for the Study of Existential Risk (CSER) and previously worked at Google.

The program is run in partnership with UK Research and Innovation and the Alan Turing Institute.

You can read more about the recent announcements on the opening of the Institute’s San Francisco office, the first public AI model test results, and the UK AISI’s partnerships with the US and Canadian AI safety institutes.
