Can AI Developers Be Held Liable for Negligence?

Bryan Choi, an associate professor of law and computer science, suggests shifting AI liability onto the builders of AI systems. Most approaches to AI safety and accountability focus on the technology's characteristics and risks rather than on the workers responsible for designing and maintaining the systems. Choi argues for a negligence-based approach, which directs legal scrutiny at the actual people who create and manage AI systems. California's AI safety bill is a step in this direction, requiring AI developers to implement protocols that avoid producing models posing an unreasonable risk of harm. However, courts don't need to wait for legislation to allow negligence claims against AI developers. In the AI context, negligence could work by classifying AI developers as ordinary employees, with employers sharing liability for their negligent acts. Alternatively, AI developers could be treated as practicing professionals, required to carry individual or group malpractice insurance. The negligence-based approach centers legal scrutiny on the conduct of the people who build and hype the technology. The approach has limitations, but fault should be the default starting point for conversations about AI accountability and safety. By focusing on the human elements of AI development, a negligence-based approach can provide a more effective framework for AI governance.
yro.slashdot.org