Help Net Security

LLMs change their answers based on who’s asking

AI chatbots may deliver unequal answers depending on who is asking the question. A new study from the MIT Center for Constructive Communication finds that LLMs provide less accurate information, refuse requests more often, and sometimes adopt a different tone when users appear less educated, less fluent in English, or from particular countries.

[Figure: Breakdown of performance on TruthfulQA between 'Adversarial' and 'Non-Adversarial' questions. (Source: MIT)]

The team evaluated GPT-4, Claude 3 Opus, and Llama 3-8B using …