In a recent development reported by Sarah Ellison and Amy Gardner in The Washington Post, secretaries of state from five U.S. states are urging Elon Musk to correct false information being spread by Grok, the AI chatbot on Musk’s social media platform, X. Grok falsely claimed that Vice President Kamala Harris had missed the ballot deadline in nine states for the 2024 presidential election. In reality, the deadlines in all of these states are still open. This unfiltered misinformation reached millions of people, either directly via X or, in most cases, through dissemination across other social media channels. The potential for shifting voter perceptions is obvious: misinformation like this would be catnip to anyone attempting to preemptively declare that the 2024 election was shady in some way.
Elon Musk’s insouciance when this misinformation was pointed out to him suggests that it was not a simple oversight. Most of the secretaries of state in the red states named in the false claim didn’t even bother to comment on the falsehoods being spread about their states. The false information appears to be a feature rather than a bug, indicating a deliberate strategy to influence public perception through AI-generated narratives. This approach raises ethical questions about the role of tech moguls in shaping the stories we consume and believe.
My current book project delves into Jerusalem’s history, exploring how its ancient stories and myths have shaped modern views of, and conflicts over, this iconic city. I find this a fascinating topic because political mythmaking is so universal a human impulse, shared across vast expanses of time and culture. The Grok incident takes us into new though still strangely familiar territory. It represents the industrial-scale production of ideological narratives, finely tuned—in this case—to the individual user’s likely voting patterns. This new frontier is not just about preserving or reinterpreting old stories; it is about creating new, targeted narratives that can sway opinions and potentially alter election outcomes.
As a nonfiction author, I aim to persuade readers, responsibly, to see things from a new perspective. My ambition, like that of most nonfiction authors, is to influence the consensus view on various topics. This is, in a way, an attempt to change the responses of AI systems like ChatGPT, Bard, and Claude the “hard” way: by influencing public opinion, which forms the training data for these systems and from which their output evolves over time. This organic change, if ultimately successful, would reflect a genuine shift in understanding, driven by informed discourse and critical thinking. In contrast, Musk’s actions with X probably reveal an impulse to take a radical shortcut to narrative control.
In a previous book, The Industrialization of Intelligence: Mind and Machine in the Modern Age, I explored the concept of “rationalization” in productive processes: innovations that either streamline how existing products are made or create new products that disrupt the markets of incumbents. I argued that rationalization is never a neutral event; there is no objectively best way to streamline an industrial process or to create a new product or service. Innovation is always undertaken to serve the self-interest of the rationalizer. In this case, that rationalizer is Elon Musk.
Because they fascinate us, we tend to give broligarchs like Musk the benefit of the doubt. We assume—or hope—that they are rationalizing the industrial world in pursuit of money, fame, or perhaps a better world. But when we look at Musk’s handling of X and Grok, none of these motives seems applicable. Instead, we see a calculated effort to control the narrative landscape, influencing political outcomes and public opinion on a massive scale. People have been trying to do this for thousands of years, and Musk is in no way unique in what he appears to want to do. What is different is the technological tools he has at his disposal.