The rapid evolution of generative AI is forcing security leaders to rewrite their playbooks and make faster, riskier bets than ever before. Boards are pushing CEOs to roll out AI across their enterprises despite concerns from legal and compliance teams about security and IP risks. Meanwhile, autonomous cyberattacks, "vibe hacking," and data theft loom as threats: researchers have found that new AI models can scheme, deceive, and even blackmail humans, and bad actors can trick AI agents into exfiltrating internal documents. Even security frameworks rolled out as recently as 2023 may already need revision given how quickly AI is changing.

Companies are making AI implementation and security decisions on increasingly short planning horizons, and some experts argue even those horizons are too long. The average company already has 66 AI tools running in its environment, and 14% of data-loss incidents involve employees accidentally sharing sensitive information with third-party generative AI tools.

Experts say security must be as adaptive as AI itself, though some argue that established practices still apply to new AI security challenges. Despite those challenges, CISOs and their teams are growing more comfortable with generative AI, which could give defenders an edge in building new tools to fend off incoming attacks.
axios.com
