AI & ML News

NIST Releases an Open-Source Platform for AI Safety Testing

The National Institute of Standards and Technology (NIST) has introduced Dioptra, an open-source software platform for testing the resilience of machine learning (ML) models against adversarial attacks. The new release adds a web-based interface, user authentication, and provenance tracking, making results reproducible and verifiable.

NIST's research categorizes ML attacks into three classes: evasion attacks, which perturb inputs at inference time to force misclassification; poisoning attacks, which corrupt training data; and oracle attacks, which probe a deployed model to extract information about its parameters or training data. Dioptra lets users measure how each attack class degrades model performance and evaluate defenses such as data sanitization and robust training methods, as sketched below. Its modular design supports swapping in different models, datasets, attack tactics, and defenses, making it accessible to developers, users, testers, auditors, and researchers, and it can be extended through Python plugins for interoperability with other tools. Experiment histories are tracked, so testing is traceable and reproducible and can yield insights for better model development and defenses.

Alongside Dioptra, NIST released three guidance documents: one on managing AI risks, one on secure software development for generative AI, and a plan for global cooperation on AI standards. Together they offer recommendations and practices for addressing the distinctive risks of AI technologies and developing them securely.
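To make the evasion category concrete, here is a minimal, self-contained sketch of the fast gradient sign method (FGSM) against a toy logistic-regression model. This is an illustration of the attack class, not Dioptra's own API: the weights, input, and epsilon budget are placeholders, and in Dioptra an attack like this would be packaged as a Python plugin and run against registered models and datasets.

import numpy as np

# Hypothetical toy model: logistic regression with fixed weights.
# A real Dioptra experiment would load a trained model via a plugin;
# the weights and input here are illustrative placeholders.
rng = np.random.default_rng(0)
w = rng.normal(size=20)          # model weights
b = 0.1                          # model bias
x = rng.normal(size=20)          # a "clean" input to attack
y = 1.0                          # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(w @ x + b)    # model's confidence that label is 1

# FGSM: for cross-entropy loss, the gradient of the loss with respect
# to the input is (p - y) * w, so stepping along its sign increases
# the loss maximally per unit of L-infinity perturbation.
eps = 0.25
p_clean = predict(x)
grad_x = (p_clean - y) * w
x_adv = x + eps * np.sign(grad_x)

print(f"confidence on clean input:       {p_clean:.3f}")
print(f"confidence on adversarial input: {predict(x_adv):.3f}")

Comparing the two printed confidences shows how a small, bounded perturbation degrades model performance; a platform like Dioptra automates this kind of before/after measurement across many models, attacks, and defenses.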
yro.slashdot.org