Fast Company

Why AI disinformation hasn’t moved the needle in the 2024 election

The feared wave of AI disinformation in the upcoming US election has largely failed to materialize. Foreign influence operations have struggled to get access to the advanced AI models needed to create convincing deepfakes, and while AI can already produce deceptive audio and text, AI-generated images and video still tend to carry telltale artifacts that give them away. Many campaigns have also avoided generative AI for content creation, wary of inaccuracies and hallucinated claims. As the tools improve, though, managing AI disinformation will likely require cooperation among AI companies, social media platforms, the security community, and government. One promising approach is establishing the provenance of AI-generated content with cryptographic codes and timestamps, which companies such as Google are already building into their generation tools (a simplified sketch of the idea appears at the end of this briefing).

Meanwhile, the tech industry is shifting its focus from chatbots to AI agents that can reason through multistep tasks with some autonomy. Anthropic, Microsoft, and Salesforce are releasing new models and frameworks that let users build their own agents, which can perceive and process data from digital environments and carry out tasks such as building websites or sorting logistics (a minimal agent-loop sketch also follows below).

At the same time, concerns are mounting about the risks of AI addiction, particularly among young users. The recent story of a 14-year-old boy who became addicted to a chatbot and eventually took his own life has raised hard questions about AI companies' responsibility for user well-being. As the technology evolves, it remains to be seen whether companies will put user safety ahead of profits.
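The provenance approach mentioned above can be illustrated in miniature. The Python sketch below binds a content hash to a timestamp with a keyed MAC so any alteration is detectable. It is a toy under stated assumptions, not Google's actual scheme: SynthID embeds watermarks in the content itself, and standards like C2PA use certificate-based signatures rather than a shared key. The key and function names here are hypothetical.

```python
# Minimal sketch of tamper-evident provenance tagging, not any vendor's
# real scheme. A generator holding SECRET_KEY (hypothetical) attaches a
# keyed MAC and timestamp to content; a verifier with the same key can
# detect alteration after the fact.
import base64
import hashlib
import hmac
import json
import time

SECRET_KEY = b"demo-key"  # hypothetical; real deployments use managed keys


def tag_content(content: bytes) -> dict:
    """Return a provenance record binding the content hash to a timestamp."""
    record = {
        "sha256": hashlib.sha256(content).hexdigest(),
        "generated_at": int(time.time()),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["mac"] = base64.b64encode(
        hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    ).decode()
    return record


def verify(content: bytes, record: dict) -> bool:
    """Check that the content and timestamp match the attached MAC."""
    claimed = {k: v for k, v in record.items() if k != "mac"}
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).digest()
    return (
        hmac.compare_digest(expected, base64.b64decode(record["mac"]))
        and claimed["sha256"] == hashlib.sha256(content).hexdigest()
    )


image = b"...generated image bytes..."
tag = tag_content(image)
assert verify(image, tag)            # untouched content passes
assert not verify(image + b"x", tag) # any edit breaks verification
```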
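As for the agent shift, the new frameworks differ, but most share one loop: the model observes its environment, picks an action, and feeds the result back until the goal is met or a step budget runs out. The sketch below is a hypothetical stand-in for that pattern; call_model, list_files, and the tool registry are invented for illustration and do not reflect Anthropic's, Microsoft's, or Salesforce's actual APIs.

```python
# Minimal agent loop: observe, choose a tool, act, repeat. All names
# here are hypothetical stand-ins, not any vendor's framework.
from typing import Callable


def list_files(path: str) -> str:
    return "index.html, style.css"  # stubbed observation of the environment


TOOLS: dict[str, Callable[[str], str]] = {"list_files": list_files}


def call_model(transcript: list[str]) -> tuple[str, str]:
    """Hypothetical planner: returns (tool_name, argument) or ('done', summary)."""
    if len(transcript) < 2:
        return "list_files", "."
    return "done", "site inventory complete"


def run_agent(goal: str, max_steps: int = 5) -> str:
    transcript = [f"goal: {goal}"]
    for _ in range(max_steps):          # cap autonomy with a step budget
        tool, arg = call_model(transcript)
        if tool == "done":
            return arg
        observation = TOOLS[tool](arg)  # act, then feed the result back
        transcript.append(f"{tool}({arg}) -> {observation}")
    return "step budget exhausted"


print(run_agent("audit the website files"))
```

The step budget is the design choice to note: it is how these frameworks bound an agent's autonomy when the model never decides it is done.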