Audiovisual
Data sources: ZENODO

Ep. 593: Manufacturing Consent: How AI Scales Digital Deception

Authors: Rosehill, Daniel; Gemini 3.1 (Flash); Chatterbox TTS

Abstract

Episode summary: Are you talking to people or a void of algorithms? In this episode, Herman Poppleberry and Corn dive deep into the "Dead Internet Theory" and the evolving landscape of digital influence operations. They break down how state actors and political parties use large language models to overcome the traditional trade-off between quantity and quality, creating thousands of unique, credible personas at the touch of a button. From "narrative laundering" to the black market for "aged accounts," learn how modern psychological operations manufacture a fake majority, and what that means for the future of online discourse.

Show Notes

### The Illusion of Connection: Navigating the Dead Internet

In the latest episode, hosts Herman Poppleberry and Corn tackle a chilling modern phenomenon: the "Dead Internet Theory." The discussion begins with a relatable observation: the feeling that social media has become a hollow echo chamber where repetitive opinions drown out genuine human interaction. This isn't just a cynical outlook; it is a documented trend in which a significant portion of internet traffic and content is no longer generated by humans, but by sophisticated botnets and AI-driven influence campaigns.

The conversation was sparked by a query from their housemate, Daniel, who noticed a persistent hum of bot activity on X (formerly Twitter) while trying to follow real-time news. Daniel's central question anchors the episode: how can political entities, such as the Likud party in Israel or the Kremlin, scale their influence without sacrificing the credibility of their fake accounts?

### From Botnets to Sock Puppets

Herman clarifies terminology that often gets muddled in public discourse. Historically, a "botnet" was a blunt instrument: a swarm of compromised computers used for basic, repetitive tasks like spamming a hashtag. These were easy to spot, with accounts that had no profile pictures and gibberish handles posting the exact same sentence simultaneously. The "sock puppet," however, represents a more dangerous evolution. A sock puppet is a digital persona designed for deception, complete with a bio, interests, and a posting history. Herman explains that the historical barrier to these operations was scalability: in the past, creating a convincing fake persona required a human operator and years of "grooming" the account. Today, that barrier has vanished.

### The AI Force Multiplier

The turning point in digital deception is the integration of large language models (LLMs) and agentic AI. Herman points out that while defense contractors in 2011 were developing software that let one person manage ten accounts, the technology of 2026 allows a single operator to oversee thousands. By feeding an AI a specific persona, such as a "skeptical nurse from Ohio," operators can generate thousands of unique, contextually relevant posts in seconds. These bots don't just post; they interact, like each other's content, and create a simulated "grassroots" movement.

This process, known as "astroturfing," exploits the human psychological tendency toward social proof. If a user sees five hundred different accounts supporting a specific narrative, they are far more likely to perceive it as a mainstream opinion, even if those accounts are all controlled by a single server.

### Narrative Laundering and Cyborg Accounts

One of the most provocative concepts discussed is "narrative laundering." Herman describes this as a multi-stage process: misinformation begins on a low-credibility site, is amplified by AI sock puppets, is shared by misled real users, and eventually gains enough "social proof" to be cited by legitimate news outlets.

The hosts also explore the rise of "cyborg accounts," a hybrid approach in which a human strategist uses automated tools to boost their reach. This method was notably seen in the "Stoic" campaign, run by an Israeli firm caught using ChatGPT to generate comments and fake news sites aimed at influencing U.S. lawmakers. By blending human strategy with machine execution, these operations become nearly indistinguishable from legitimate political activism.

### The Arms Race of Detection

If these networks are so prevalent, why don't platforms simply shut them down? Corn and Herman explain that we are in an "arms race" between manipulators and platform security. To bypass detection, botnets now use "jitter," randomizing the timing of posts to avoid looking mechanical. They use LLMs to ensure no two posts are identical, and they draw on a massive black market for "aged accounts."

Aged accounts are perhaps the most cynical tool in the kit. Operators purchase accounts created years ago that have a dormant history of "normal" human activity: posts about pets or sports. By "wearing the skin" of these old accounts, influence operations can bypass the filters that typically flag new accounts, giving their propaganda an unearned sense of longevity and history.
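To make the detection side of this arms race concrete, here is a minimal sketch of the kind of timing heuristics that jitter and aged accounts are designed to defeat. It is illustrative only: the function names, thresholds, and example data are all hypothetical, not drawn from any real platform's systems. A mechanically scheduled account posts at near-constant intervals (very low variance between posts); adding random jitter pushes that variance toward human-looking burstiness. The dormancy check sketches the "aged account" signal: a long-idle account that suddenly springs back to life.

```python
"""Illustrative bot-cadence heuristics (hypothetical thresholds and data)."""
from statistics import mean, pstdev


def interval_regularity(timestamps: list[float]) -> float:
    """Coefficient of variation of the gaps between posts (in seconds).

    Values near 0 mean metronome-like posting; genuine human activity
    is bursty, so its coefficient of variation is typically much higher.
    """
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return float("inf")  # too little data to judge
    return pstdev(gaps) / mean(gaps)


def dormancy_gap_days(timestamps: list[float]) -> float:
    """Longest silent stretch in an account's posting history, in days."""
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return max(gaps, default=0.0) / 86_400


def looks_suspicious(timestamps: list[float],
                     regularity_cutoff: float = 0.1,
                     dormancy_cutoff_days: float = 365.0) -> bool:
    """Flag an account if either heuristic trips (cutoffs are made up)."""
    ts = sorted(timestamps)
    return (interval_regularity(ts) < regularity_cutoff
            or dormancy_gap_days(ts) > dormancy_cutoff_days)


if __name__ == "__main__":
    import random

    # A scheduler posting every 3600 s exactly: regularity ~ 0 -> flagged.
    mechanical = [i * 3600.0 for i in range(48)]
    # The same schedule with +/-15 min of random jitter: harder to flag.
    jittered = [t + random.uniform(-900, 900) for t in mechanical]
    print(looks_suspicious(mechanical))  # True
    print(looks_suspicious(jittered))    # likely False
```

In practice, platform defenses combine many such weak signals with content and network-level features; note how each evasion tactic named in the episode (jitter, LLM-unique text, purchased aged accounts) targets exactly one of those signals, which is what makes the "arms race" framing apt.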
### Conclusion: A Shift in Reality

The episode concludes with a sobering takeaway: modern influence operations are no longer just technical challenges; they are psychological ones. The goal isn't necessarily to change a person's mind with a single fact, but to shift their perception of what the majority believes. As Herman notes, when every major government adopts these "computational propaganda" tactics, the internet stops being a town square and starts being a manufactured reality. For listeners like Daniel, the challenge is no longer just finding the news, but discerning which "people" in the digital void are actually there.

Listen online: https://myweirdprompts.com/episode/ai-influence-operations-botnets
