Google Cloud's security teams have made significant progress in exploring the risks organizations face from AI, including building the AI Red Team and sharing the Secure AI Framework (SAIF). The Google Cloud Vulnerability Research team discovered and remediated previously unknown vulnerabilities in Vertex AI, and also researched similar vulnerabilities in another large cloud provider. AI developers should normalize sharing AI security research to raise security standards globally; the industry should make it easier to find and fix vulnerabilities, not harder. The Coalition for Secure AI and the open-source SAIF have important roles to play in safeguarding the technology that supports AI advancements. Google Cloud advocates for consistent control frameworks that can support AI risk mitigation and scale protections across platforms and tools, and it supports combining Confidential Computing with AI to solve customer problems. Google Cloud's podcasts also discuss securing inherited cloud projects, using LLMs to analyze Windows binaries, and how threat actors bypass multi-factor authentication.
cloud.google.com
