The UK government, in collaboration with the AI Safety Institute (AISI), has announced a new grant program aimed at advancing research in systemic AI safety. Initially backed by up to £8.5 million, the program will fund researchers to explore how to protect society from AI risks such as deepfakes and cyber-attacks, while also harnessing AI's benefits. Shahar Avin of the Centre for the Study of Existential Risk (CSER) at Cambridge University will lead the initiative. This marks Phase 2 of the AI Safety Institute's plan to promote safe AI across society, following its initial evaluations of AI models. The program is expected to address under-explored areas in AI safety and to make systems more resilient, particularly in safeguarding critical infrastructure. The fast grants program will complement the existing AI Safety Fund and build on concepts described three years ago in 'Unsolved Problems in ML Safety.'
Awesome to see this program being prioritized, and Shahar is exactly the person you want leading it: an innovative thinker on AI safety and governance, and on how to improve research funding. Excited to see what comes from this!! https://t.co/2nCzhkV7SN
I’m really excited to see that the UK’s AI Safety Institute is including societal-level safety risks from AI among its targets of study. https://t.co/l8maJwjePm
UK government announces £8.5m in grants for AI safety research https://t.co/eyhZ3jF05a
New fast grants programme focusing on systemic AI safety, i.e. safeguarding the societal systems and critical infrastructure into which AI is being deployed. Great stuff, AISI acing it lately 🔥 https://t.co/EStIS1pSS2 https://t.co/hTEBklHENe
Up to £8.5 million in grants will be offered by the UK government for researchers to study how to protect society from AI risks such as deepfakes and cyber-attacks, as well as harness AI’s benefits. https://t.co/voPCWeJgmv #Tech | #News | #AI
The @AISafetyInst is launching a programme of fast grants to jumpstart research. Focus is on making systems more resilient - for example, those that could help protect critical infrastructure from AI-enabled cyberattacks. More info here: https://t.co/eeHSDXWc6J
Excited to see the announcement today of the UK’s new Systemic AI Safety fund, which will be a great complement to our AI Safety Fund. Very much look forward to all the important research it will support! https://t.co/iCi0hPWBiz
Exciting news! CSER's Shahar Avin will lead a new grant programme from the UK Government and @AISafetyInst on systemic AI safety https://t.co/RbL7R5Cv3J
This new grants programme, led by the excellent Shahar Avin (also of AI:FAR), is a really welcome development in my view. An important, challenging, and under-addressed area. Please share! https://t.co/OEemjiYfrF
When I launched the @AISafetyInst last year, I committed to driving forward scientific research on AI safety. Having put in place evaluations of the models, we are now moving to Phase 2 of the plan: safely harnessing the opportunities of AI by making AI safe across the whole of society. https://t.co/KsNMwQFFBY
There are now grants for systemic safety! Three years ago we described "Systemic Safety" in Unsolved Problems in ML Safety https://t.co/wJu13ZPZuo It's great that AI safety funding is becoming more sociotechnical and addressing systemic impacts. https://t.co/T3qVLXGIrx
We are announcing new grants for research into systemic AI safety. Initially backed by up to £8.5 million, this program will fund researchers to advance the science underpinning AI safety. Read more: https://t.co/QHOLUp3QGR https://t.co/jnAdLJ4eAg