This text summarizes a paper presented at the NDSS Symposium on machine learning security. The paper introduces a new paradigm for privacy-preserving machine learning (PPML) tailored to quantized models, which are widely used to reduce the latency and resource costs of inference. Traditional PPML approaches handle quantized models poorly because quantized operators interleave arithmetic with non-linear steps such as scaling, rounding, and clipping, which are expensive to evaluate securely. The paper's key insight is that each quantized operator takes inputs from a small, finite domain, so its behavior can be captured exactly by a look-up table. The authors therefore view model inference as a sequence of quantized operators, each evaluated via a look-up table, and design an efficient private look-up table evaluation protocol that minimizes online communication. The protocol runs fast even on a single CPU core, and the resulting PPML framework achieves substantial online speedups over existing solutions for both CNNs and large language models.

NDSS is a conference focused on network and distributed system security; it emphasizes practical system design and implementation and promotes the adoption of advanced security technologies. The post credits NDSS for publishing the underlying content.
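To make the look-up-table idea concrete, here is a minimal plaintext sketch, not the authors' protocol or code. It assumes (my assumptions, not from the source) symmetric int8 quantization, GELU as the example non-linearity, and 2-party additive secret sharing over a 64-bit ring with the preprocessing phase stubbed out; all names (`build_lut`, `private_lut_eval`, the scales) are illustrative.

```python
# A minimal plaintext sketch of the LUT idea; NOT the paper's protocol.
# Assumptions (mine): symmetric int8 quantization, GELU as the example
# non-linearity, 2-party additive sharing mod 2**64, stubbed preprocessing.

import math
import random

MOD = 2 ** 64  # ring for additive secret sharing


def gelu(x: float) -> float:
    """Exact GELU, the non-linearity we want to tabulate."""
    return 0.5 * x * (1.0 + math.erf(x / math.sqrt(2.0)))


def build_lut(scale_in: float, scale_out: float) -> list:
    """Precompute f(q) for every int8 input q in [-128, 127].

    A quantized operator has only 256 possible inputs, so its entire
    behavior fits in one 256-entry table -- the observation that lets
    secure protocols replace non-linear circuits with table lookups.
    """
    table = []
    for q in range(-128, 128):
        y = gelu(q * scale_in)                          # dequantize, apply f
        qy = max(-128, min(127, round(y / scale_out)))  # requantize, clip
        table.append(qy % MOD)                          # store as ring element
    return table


def share(x: int) -> tuple:
    """Additively secret-share x between two parties."""
    r = random.randrange(MOD)
    return r, (x - r) % MOD


def private_lut_eval(table: list, index: int) -> int:
    """Toy 2-party LUT evaluation on a secret index.

    Preprocessing (stubbed here): the parties obtain additive shares of
    the one-hot indicator vector for `index`. Online, each party locally
    computes the inner product of the PUBLIC table with its share, so the
    only online communication is reconstructing the single output --
    which is why LUT-based protocols have such low online cost.
    """
    one_hot = [1 if i == index else 0 for i in range(256)]
    e0, e1 = zip(*(share(b) for b in one_hot))            # preprocessing stub
    y0 = sum(t * e for t, e in zip(table, e0)) % MOD      # party 0, local
    y1 = sum(t * e for t, e in zip(table, e1)) % MOD      # party 1, local
    return (y0 + y1) % MOD                                # reconstruct output


if __name__ == "__main__":
    lut = build_lut(scale_in=0.05, scale_out=0.02)
    q_in = 37                                 # some int8 activation value
    out = private_lut_eval(lut, q_in + 128)   # shift index into [0, 255]
    assert out == lut[q_in + 128]
    signed = out if out < 2 ** 63 else out - MOD
    print(f"GELU({q_in}) -> {signed} (quantized)")
```

Because both local inner products depend only on public table entries and each party's own shares, the correctness check `y0 + y1 = table[index] (mod 2**64)` follows directly from the linearity of additive sharing; the real cost in an actual protocol lies in generating the shared one-hot vector during preprocessing.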
