Yoshua Bengio, a renowned AI researcher, is joining Safeguarded AI, a UK government-funded project to build safety mechanisms into AI systems. The project aims to develop an AI system that can check the safety of other AI systems deployed in critical areas. Safeguarded AI's goal is to provide quantitative guarantees, such as risk scores, about the real-world effects of AI systems, combining scientific world models with mathematical proofs to verify the safety of AI models. Bengio wants to ensure that future AI systems do not cause serious harm, arguing that current techniques are insufficient to guarantee their safe behavior.
The Safeguarded AI project will create a "gatekeeper" AI to understand and reduce the safety risks of other AI agents. ARIA, the UK's Advanced Research and Invention Agency, which funds the project, is offering grants to organizations in high-risk sectors to develop applications that could benefit from AI safety mechanisms. ARIA also plans to establish a nonprofit organization to oversee the development of Safeguarded AI's safety mechanisms. The project is part of the UK's effort to position itself as a pioneer in AI safety, and Bengio hopes it will promote international collaboration on addressing the risks of advanced AI systems.
www.technologyreview.com