Help Net Security

Package hallucination: LLMs may deliver malicious code to careless devs

LLMs’ tendency to “hallucinate” code packages that don’t exist could become the basis for a new type of supply chain attack dubbed “slopsquatting” (courtesy of Seth Larson, Security Developer-in-Residence at the Python Software Foundation).

A known occurrence

Many software developers nowadays use large language models (LLMs) to help with their programming. And, unfortunately, LLMs’ known tendency to spit out fabrications and confidently present them as facts when asked questions on various topics extends to coding. …
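The article does not include code, but the mechanism it describes is easy to illustrate: a hallucinated dependency is simply a package name that does not resolve to a real project on the index. The sketch below, a minimal illustration and not from the article, checks suggested names against PyPI’s public JSON API (https://pypi.org/pypi/<name>/json), where a 404 indicates the project is not registered.

```python
import sys
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` resolves to a real project on PyPI.

    Uses PyPI's public JSON API; a 404 response means no such
    project is registered under that name.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors (e.g. rate limiting) are not a verdict


if __name__ == "__main__":
    # Usage: python check_pkgs.py <package-name> [<package-name> ...]
    for name in sys.argv[1:]:
        verdict = "exists" if package_exists_on_pypi(name) else "NOT FOUND (possible hallucination)"
        print(f"{name}: {verdict}")
```

Note that existence alone is not a safety guarantee: the whole point of slopsquatting is that an attacker may have already registered a commonly hallucinated name, so a suggested dependency that does resolve still deserves scrutiny before installation.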