Confusion, Synthesis, and the Future of Sensemaking

I. The Puzzle
Consider a puzzle. Classical propaganda theory holds that effective manipulation requires consistency. To persuade a population, authorities must craft coherent narratives, repeat them reliably, and avoid contradictions that might undermine credibility. This view has deep roots. From Aristotle's Rhetoric to twentieth-century analyses of totalitarian communication, the conventional wisdom has been clear: persuasion depends on the appearance of truth, and truth requires consistency.
Yet modern information manipulation often does the opposite. Official sources issue contradictory statements. Partisan media advances incompatible claims simultaneously. Social media floods the public sphere with conflicting narratives, many of them transparently false. Far from avoiding contradiction, contemporary propagandists seem to embrace it.
Why would this be? If consistency is essential to credibility, why would sophisticated actors deliberately undermine their own credibility through contradiction?
Now consider a prediction: within a decade, most people will rarely encounter raw information at all. They will experience the world through AI-generated syntheses--summaries, analyses, and explanations produced by systems that process thousands of sources to deliver coherent narratives. What happens to propaganda theory then?
This article argues that the puzzle dissolves once we distinguish between two fundamentally different objectives--and that the coming shift toward AI-mediated synthesis will transform both the problem and its solutions. Persuasion-based propaganda aims to change what people believe. Confusion-based manipulation aims to prevent people from coordinating around shared beliefs. The first requires consistency; the second does not. Indeed, for confusion-based manipulation, contradiction is not a bug but a feature.
The distinction matters enormously. If the goal is persuasion, the appropriate response is counter-persuasion: better arguments, more accurate information, effective fact-checking. But if the goal is confusion, counter-persuasion may be insufficient or even counterproductive. A different response is required--one focused not on winning arguments but on preserving the conditions for collective sensemaking.
This article proceeds in seven parts. Part II examines three mechanisms through which confusion-based manipulation operates, drawing on recent scholarship in political science, behavioral economics, and network theory. Part III synthesizes these mechanisms into a unified behavioral theory. Part IV considers objections--including whether the theory applies to democracies, whether it underestimates citizen resilience, and whether it can be empirically tested. Part V offers a taxonomy of countermeasures, distinguishing interventions at the individual, network, and institutional levels. Part VI examines AI's dual role: as an amplifier of confusion and as potential epistemic infrastructure--with particular attention to how AI synthesis may fundamentally reshape the information landscape. A brief conclusion follows.
II. Three Mechanisms
A. The Firehose Model
The first mechanism was identified by RAND Corporation researchers in their influential 2016 analysis of Russian propaganda techniques. They termed it the "Firehose of Falsehood" and identified four defining characteristics:
- High volume and multichannel distribution. Messages are disseminated simultaneously across television, radio, social media, websites, and in-person networks. The sheer quantity is overwhelming.
- Rapid, continuous, and repetitive output. New claims emerge constantly. By the time one claim is evaluated, several more have appeared. Repetition occurs across channels and over time.
- No commitment to objective reality. True statements, half-truths, distortions, and outright fabrications are mixed freely. The distinction between fact and fiction is deliberately blurred.
- No commitment to consistency. Contradictory claims may be advanced simultaneously or in rapid succession. When contradictions are noted, they are ignored or explained away.
The illusory truth effect, documented extensively in psychological research, demonstrates that repeated exposure to a statement increases its perceived truthfulness--even when the statement is false, even when people have been warned it is false, and even when it contradicts other statements they have heard. Familiarity functions as a heuristic for truth. The firehose exploits this: by repeating claims across many channels, it generates familiarity that registers as credibility, regardless of consistency.
The availability heuristic compounds the effect. People assess the probability of events based on how easily examples come to mind. A high-volume information environment makes certain claims cognitively "available" simply through repetition, independent of their accuracy. When asked what is happening in the world, people recall what they have heard most often--not what is most true.
Finally, the firehose model exploits cognitive bandwidth limitations. Evaluating claims for accuracy requires mental effort. When claims arrive faster than they can be evaluated, the cognitive system is overwhelmed. People resort to shortcuts: trusting familiar sources, following social cues, or simply disengaging. This is not irrationality; it is a rational response to an irrational information environment.
The strategic purpose becomes clear. The firehose does not aim to convince audiences of any particular claim. It aims to "entertain, confuse, and overwhelm," in RAND's phrasing--to consume the attention and cognitive resources that would otherwise be devoted to verification and collective deliberation.
B. The Cynicism Model
The second mechanism was articulated by Konstantin Sonin of the University of Chicago in his 2025 research on authoritarian communication. Sonin addresses a different puzzle: why do regimes tell lies that everyone knows are lies?
Consider sham elections. When an authoritarian government claims 97% electoral support despite visible evidence of fraud, the claim persuades no one. Citizens know it is false. The government knows that citizens know. Yet the ritual continues. Why?
Sonin's answer is that the purpose is not persuasion but demoralization. Transparent lies communicate a meta-message: "We can say whatever we want, and there is nothing you can do about it." The lies are not meant to be believed; they are meant to demonstrate power.
More subtly, transparent official lies induce a generalized cynicism about political communication. If the government lies openly, citizens infer that all political actors must lie. The psychological reasoning is: "If our government is this corrupt, surely all governments are equally corrupt. Why bother trying to change anything?"
Sonin calls this the "reverse cargo cult" phenomenon. In the original cargo cults, Pacific Islanders built imitation airstrips hoping to attract the cargo planes they had observed during World War II. The imitation was sincere but ineffective. In the reverse cargo cult, citizens are induced to believe that functional institutions elsewhere are equally fake--that Western democracies are just as corrupt, their elections just as fraudulent, their media just as captured. The imitation runs in reverse: authentic institutions are reimagined as imitations.
The behavioral mechanism is learned helplessness. When people repeatedly encounter situations where their actions have no effect on outcomes, they generalize the lesson to novel situations. They stop trying. Political engagement seems pointless because politics itself seems pointless.
Cynicism, on this account, is not a failure of propaganda but its intended product. A population that believes nothing can be trusted is a population that will not organize, protest, or demand accountability. Cynicism is demobilization.
C. The Fragmentation Model
The third mechanism emerges from Guriev and Treisman's influential work on "informational autocracy" (2019), extended by Sonin's research on network dynamics.
Guriev and Treisman distinguish two types of authoritarian rule. Ideological autocrats--Hitler, Stalin, Mao--sought to transform citizens' fundamental values through comprehensive ideologies. They demanded belief. Informational autocrats--the dominant form today--make no such demands. They do not require citizens to believe the regime is good. They require only that citizens cannot coordinate to oppose it.
The key insight is that collective action requires common knowledge. It is not enough for each citizen to believe the regime is illegitimate; each must know that others believe it too, and know that others know, and so on. Propaganda that fragments shared understanding--even without producing belief in any particular claim--can prevent this common knowledge from forming.
Informational autocrats exploit a fundamental asymmetry. Elites--journalists, academics, opposition politicians--may see through the manipulation. But the broader public, with less time and fewer resources for verification, is more susceptible. The gap between informed elites and the general population becomes a structural vulnerability.
Sonin's network research adds a crucial dimension. Propaganda is most effective at the extremes of social connectivity:
- Among the isolated, who lack peers for verification and must rely on official sources.
- Among the densely connected, where information cascades can rapidly establish new "common knowledge," even if that knowledge is false.
Modern social media platforms reproduce both extremes at once, and at scale. They connect people broadly but shallowly: communities feel interconnected but are actually fragmented into micro-clusters, each dense enough to sustain cascades yet with minimal cross-group communication. Algorithmic personalization amplifies the effect, creating millions of micro-realities optimized for engagement rather than shared understanding.
The behavioral mechanism is confirmation bias operating within filter bubbles. People seek information that confirms existing beliefs and avoid information that challenges them. In fragmented networks, this natural tendency produces epistemic divergence: groups develop incompatible understandings of basic facts, making coordination across groups nearly impossible.
III. A Behavioral Theory of Confusion
The three mechanisms--overwhelming (firehose), demoralizing (cynicism), and fragmenting (network effects)--operate synergistically. Together, they suggest a unified behavioral theory of confusion-based manipulation.
The theory rests on a crucial distinction: between changing beliefs and preventing coordination. Traditional propaganda analysis focuses on belief. But for many purposes, belief is not what matters. What matters is whether people can act together.
Collective action requires:
- Shared grievance: People must believe something is wrong.
- Shared attribution: People must agree on who or what is responsible.
- Shared efficacy: People must believe collective action can succeed.
- Coordination capacity: People must be able to find each other, communicate, and act in concert.
Confusion-based manipulation works by disrupting any one of these conditions. Notice that the strategy does not require anyone to believe the propaganda. It does not even require the propaganda to be coherent. All it requires is that people cannot agree on what is true, who is responsible, whether action is possible, or how to coordinate. Confusion is sufficient; persuasion is unnecessary.
This framing clarifies a key finding from Sonin's research: "Manipulation does not require mass belief in official messages. It requires only enough confusion, cynicism, or disconnection to prevent coordinated opposition."
The behavioral economics synthesis is straightforward:
- Cognitive load from the firehose depletes the mental resources needed for verification.
- Learned helplessness from the cynicism model discourages even the attempt.
- Confirmation bias in fragmented networks prevents convergence on shared understanding.
- Coordination failure results: even those who see through the manipulation cannot find each other.
This framing also suggests why AI synthesis might matter so much. If confusion works by overwhelming cognitive capacity and fragmenting shared understanding, then systems that compress information and re-establish common reference points could be transformative. But as we will see, synthesis introduces vulnerabilities of its own.
IV. Objections and Responses
Several objections might be raised against this account. It is worth considering them directly.
Objection 1: This overstates intentionality. One might argue that contemporary information chaos is an emergent property of decentralized media systems, not a deliberate strategy. Social media incentivizes engagement over accuracy; confusion results without anyone planning it.
Response: The objection has force for some contexts but not others. The RAND analysis specifically documents intentional propaganda strategies by state actors. Sonin's research identifies deliberate government tactics. The question is not whether all confusion is intentional--clearly it is not--but whether intentional actors can exploit systemic tendencies for strategic advantage. The evidence suggests they can and do.
Objection 2: Confusion benefits no one. A different objection holds that information chaos is a tragedy of the commons--everyone suffers, including those who generate it. Why would rational actors produce confusion that harms them too?
Response: This objection underestimates asymmetries. Those who generate confusion often retain access to reliable information through elite networks, private intelligence, and direct observation. The public is confused; the propagandists are not. Moreover, confusion may benefit incumbents by raising the costs of challenge. The status quo persists when alternatives cannot be articulated or coordinated.
Objection 3: Citizens are more sophisticated than this account implies. People are not passive recipients of manipulation. They discount unreliable sources, compare claims, and form reasoned judgments.
Response: The objection is correct about many individuals but misses the collective action problem. Even sophisticated citizens face difficulties coordinating with others. If I recognize propaganda but cannot identify others who recognize it too, I cannot organize resistance. The manipulation targets social epistemology--shared knowing--not just individual cognition. A population of individually sophisticated citizens can still fail to coordinate if they cannot identify each other.
Objection 4: This account risks fatalism. If confusion-based manipulation is so effective, what hope is there for response?
Response: Understanding the mechanism is the first step toward response. The theory identifies specific vulnerabilities: cognitive bandwidth, trust infrastructure, and coordination capacity. Each vulnerability suggests countermeasures, as the next section explores. Fatalism would be appropriate only if no interventions were possible. They are.
Objection 5: The theory exhibits Western bias. The research cited--RAND on Russian propaganda, Sonin on authoritarian communication--focuses on adversarial contexts. One might argue that confusion-based manipulation is a uniquely authoritarian phenomenon, inapplicable to Western democracies with competitive media environments and constitutional protections for speech.
Response: This objection identifies a real limitation in the literature's framing but not in the underlying mechanisms. The behavioral economics at the core of the theory--cognitive load, learned helplessness, confirmation bias--operate regardless of regime type. What differs is the source of confusion, not its effects.
In authoritarian contexts, the state deliberately generates confusion. In democratic contexts, confusion more often emerges from the interaction of market incentives, platform architectures, and political competition. Partisan media ecosystems, algorithmically optimized content, and the economics of attention can produce firehose-like effects without centralized coordination. The result--fragmented publics unable to converge on shared understanding--may be similar even when the causes differ.
Indeed, some research suggests democracies face additional vulnerabilities. Authoritarian propaganda must overcome citizens' baseline distrust of official sources. In democracies, manipulation can exploit the very trust that makes democratic institutions function. When partisan actors weaponize that trust, the damage may be harder to repair.
The theory's core claim--that confusion prevents coordination regardless of whether anyone believes the confusing messages--applies across regime types. The appropriate countermeasures may differ, but the underlying problem does not.
Objection 6: This account underestimates citizen agency and resilience. History provides numerous examples of successful collective action despite intensive information manipulation. The Hong Kong protests of 2019-2020 achieved remarkable coordination under pervasive surveillance and censorship. The Arab Spring movements of 2011 mobilized millions across multiple countries. Belarus protesters in 2020 and Iranian demonstrators in 2022 organized sustained resistance despite state-controlled media environments. If confusion-based manipulation were as effective as claimed, how do we explain these cases?
Response: This is perhaps the strongest objection, and it requires a nuanced response. The theory does not claim that confusion-based manipulation always succeeds--only that it shifts the odds and raises the costs of coordination.
Several factors distinguish successful mobilizations from failed ones. First, triggering events can temporarily cut through confusion by providing focal points that require no coordination to recognize. When a protester is killed on video, when an election is blatantly stolen, when economic conditions become unbearable, the shared reality becomes undeniable. The manipulation works best in ambiguous situations; it struggles against vivid, unambiguous grievances.
Second, pre-existing social infrastructure matters enormously. The Hong Kong protests built on decades of civil society organization, professional associations, and religious networks. The Belarusian protests drew on factory worker networks and neighborhood communities. These offline ties provided coordination capacity that online manipulation could not easily disrupt. The theory predicts exactly this: confusion targets the formation of coordination capacity; it is less effective against coordination capacity that already exists.
Third, successful movements often exhibit tactical adaptation. Hong Kong protesters developed leaderless, decentralized structures that resisted both surveillance and narrative capture. They used hand signals, mesh networks, and coded language to coordinate while minimizing exploitable targets. This is not evidence against the theory but confirmation of it: movements succeed precisely when they develop countermeasures against confusion-based disruption.
Finally, we must acknowledge survivorship bias. We remember the mobilizations that achieved visibility; we do not count the countless potential movements that never coalesced because participants could not find each other. The theory's claim is probabilistic: confusion reduces the likelihood of successful coordination. The existence of outliers does not refute the trend.
Objection 7: The theory faces a measurement problem. How would we empirically distinguish "confusion-based manipulation" from ordinary disagreement in pluralistic societies? Democratic politics naturally produces competing narratives. Citizens reasonably disagree about values, priorities, and interpretations of evidence. When does healthy pluralism become pathological confusion? Without clear metrics, the theory risks unfalsifiability--any disagreement can be labeled "manipulation," and any agreement can be attributed to its absence.
Response: This methodological challenge is serious and the theory's proponents must address it directly. Several distinctions help operationalize the concept.
First, we can distinguish substantive disagreement from epistemic fragmentation. Substantive disagreement occurs when people share a common understanding of basic facts but differ on values or policy implications. Epistemic fragmentation occurs when groups cannot agree on what the facts are--when they inhabit different informational realities. Healthy pluralism involves the former; pathological confusion involves the latter. Survey research can measure this distinction by testing agreement on factual premises independent of policy conclusions.
Second, we can examine coordination capacity directly. The theory predicts that confusion reduces the ability to organize collective action, not merely the prevalence of disagreement. Metrics might include: the time required to mobilize responses to shared threats; the breadth of coalitions that form around common causes; the stability of alliances once formed. These are measurable outcomes distinct from the presence of disagreement itself.
Third, we can study information flow patterns. Healthy pluralism features robust cross-group communication--people encounter and engage with perspectives different from their own. Pathological fragmentation features sealed information environments with minimal bridging. Network analysis can distinguish these patterns empirically.
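To make the network-analysis point concrete, the sketch below (Python, on synthetic data) computes one simple indicator: the share of communication ties that cross group lines. The edge list, group labels, and any threshold for calling a network "fragmented" are illustrative assumptions rather than validated measures; real studies would use richer metrics such as modularity or exposure diversity.

```python
from collections import Counter

def fragmentation_profile(edges, group_of):
    """Summarize how much communication crosses group lines.

    edges: iterable of (node_a, node_b) communication ties.
    group_of: dict mapping each node to a group label.
    Returns the share of ties that bridge groups and per-group within-tie counts.
    """
    total, bridging = 0, 0
    within = Counter()
    for a, b in edges:
        total += 1
        if group_of[a] == group_of[b]:
            within[group_of[a]] += 1
        else:
            bridging += 1
    bridging_share = bridging / total if total else 0.0
    return bridging_share, dict(within)

# Toy example: two tight clusters joined by a single bridging tie.
edges = [("a1", "a2"), ("a2", "a3"), ("b1", "b2"), ("b2", "b3"), ("a1", "b1")]
groups = {"a1": "A", "a2": "A", "a3": "A", "b1": "B", "b2": "B", "b3": "B"}
share, within = fragmentation_profile(edges, groups)
print(f"bridging share: {share:.2f}")  # low values suggest sealed clusters
```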
Fourth, we can trace the sources and spread of contradictory claims. Organic disagreement emerges from diverse communities working through complex issues. Manufactured confusion often shows telltale signatures: coordinated timing, artificial amplification, strategic targeting of wedge issues. Forensic analysis can sometimes distinguish these.
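As a rough illustration of what such forensic signatures might look like computationally, the following sketch flags cases where many distinct accounts post near-identical text within a short window. The ten-minute window, the five-account threshold, and the text normalization are arbitrary assumptions; matching timing and wording is a cue for human review, not proof of coordination.

```python
from collections import defaultdict
from datetime import timedelta

def flag_coordinated_bursts(posts, window=timedelta(minutes=10), min_accounts=5):
    """Flag near-identical messages posted by many accounts in a short window.

    posts: list of dicts with 'account', 'text', and 'time' (a datetime) keys.
    Returns a list of (normalized_text, account_count) pairs worth human review.
    """
    by_text = defaultdict(list)
    for p in posts:
        normalized = " ".join(p["text"].lower().split())
        by_text[normalized].append((p["time"], p["account"]))

    flagged = []
    for text, entries in by_text.items():
        entries.sort()  # chronological order
        for i in range(len(entries)):
            # distinct accounts posting this text within `window` of entry i
            in_window = {acct for t, acct in entries[i:] if t - entries[i][0] <= window}
            if len(in_window) >= min_accounts:
                flagged.append((text, len(in_window)))
                break
    return flagged
```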
The theory is falsifiable. It predicts that deliberate introduction of contradictory information will reduce coordination capacity, and that reducing such information (or building resilience to it) will restore coordination capacity. These predictions can be tested experimentally and observationally.
Objection 8: The account is technologically deterministic. By emphasizing social media platforms, algorithmic amplification, and network fragmentation, the theory may overstate technology's causal role. Labor movements coordinated without Twitter. The civil rights movement achieved remarkable solidarity without Facebook. Independence movements expelled colonial powers without algorithmic recommendation systems. If collective action was possible before digital technology, perhaps technology is less important than claimed--and perhaps offline social infrastructure remains the decisive factor.
Response: This objection correctly identifies that collective action predates digital technology, but it misunderstands the theory's claims. The argument is not that technology enables confusion-based manipulation for the first time. It is that technology changes the scale, speed, and cost structure of both manipulation and coordination.
Pre-digital collective action faced different constraints. Information traveled slowly; manipulation was expensive; counter-narratives had time to develop; local communities could verify claims through direct observation and trusted intermediaries. The civil rights movement, for example, benefited from Black churches, historically Black colleges, and dense community networks that provided both coordination infrastructure and epistemic resilience. Manipulation existed, but it operated on timescales that allowed for collective response.
Digital technology compresses these timescales dramatically. The firehose can now generate more contradictory claims in an hour than twentieth-century propagandists produced in a year. Algorithmic amplification spreads content before verification is possible. The economics have shifted: generating confusion has become cheap while building coordination capacity remains expensive and slow.
Moreover, digital technology has partially displaced the offline infrastructure that once provided resilience. Union membership has declined. Religious attendance has fallen. Local newspapers have collapsed. Neighborhood associations have weakened. The "bowling alone" phenomenon--declining participation in civic organizations--leaves people more dependent on digital networks that are more easily manipulated.
The theory does not claim technology determines outcomes. It claims technology shifts the playing field in ways that favor manipulation over coordination. The appropriate response is not to reject technology but to rebuild coordination infrastructure--both online and offline--suited to the current environment. The pre-digital past demonstrates that coordination is possible; it does not demonstrate that pre-digital strategies will work in post-digital conditions.
V. Countermeasures: A Taxonomy
If confusion-based manipulation targets cognitive bandwidth, trust, and coordination capacity, effective countermeasures must address these same targets. This section offers a taxonomy, distinguishing interventions at the individual, network, and institutional levels.
A. Individual-Level Interventions
1. Inoculation (Prebunking)
Research demonstrates that prebunking--exposing people to weakened forms of manipulation before they encounter it in the wild--is more effective than debunking--correcting false claims after exposure. The psychological mechanism parallels vaccination: exposure to weakened pathogens builds immunity.
Prebunking works because it targets the technique rather than the content. Teaching people to recognize the firehose pattern ("you will see many contradictory claims designed to overwhelm you") is more durable than correcting any particular false claim. The lesson generalizes; the fact-check does not.
2. Cognitive Awareness Training
Training people to recognize their own cognitive vulnerabilities--the illusory truth effect, the availability heuristic, confirmation bias--can reduce susceptibility. When people understand that familiarity feels like truth, they can consciously adjust their intuitions.
Such training is most effective when it includes concrete examples and practice opportunities, not merely abstract instruction. The goal is to convert explicit knowledge ("I know this bias exists") into practical skill ("I notice when it is operating").
3. Source Literacy
Traditional media literacy focused on evaluating content: Is this claim true? Is this evidence sufficient? Source literacy shifts focus to evaluating sources: Who produced this? What are their incentives? What is their track record?
In high-volume information environments, source evaluation is more efficient than claim-by-claim evaluation. A trusted source can be relied upon without independent verification of each claim; an unreliable source can be ignored or heavily discounted. Developing source literacy reduces cognitive load while maintaining epistemic vigilance.
B. Network-Level Interventions
1. Strengthening Peripheral Connections
Sonin's research shows that isolated individuals are most vulnerable to manipulation. The countermeasure is deliberate outreach to those on the social periphery--not to persuade them of particular claims, but to integrate them into networks where verification is possible.
This is harder than it sounds. Isolated individuals may be isolated for reasons: geographic, linguistic, cultural, or ideological. Effective outreach requires meeting people where they are, addressing barriers to connection, and building trust incrementally. It is social work as much as information work.
2. Cross-Cutting Exposure
Filter bubbles fragment shared understanding. The countermeasure is deliberate exposure to perspectives outside one's usual information diet--not to adopt those perspectives, but to understand how others see the world and to identify potential common ground.
Research suggests that forced exposure to opposing views can backfire, increasing polarization. More effective is voluntary exposure facilitated by design: algorithms that surface "bridging" content, platforms that reward cross-group engagement, and social practices that normalize respectful disagreement.
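One way to read "algorithms that surface bridging content" is as a ranking rule that rewards approval across groups rather than intensity within one group. The sketch below assumes each item carries per-group exposure and approval counts and scores it by its approval rate in the least approving group; both the data model and the scoring rule are illustrative choices, not a description of any platform's actual system.

```python
def bridging_score(approvals, exposures):
    """Score an item by its approval rate in the *least* approving group.

    approvals, exposures: dicts mapping group label -> counts for this item.
    Items liked only inside one camp score near zero; items earning some
    approval everywhere score higher, so ranking by this value surfaces
    cross-cutting content instead of purely divisive content.
    """
    rates = []
    for group, shown in exposures.items():
        if shown:
            rates.append(approvals.get(group, 0) / shown)
    return min(rates) if rates else 0.0

items = {
    "divisive_post": ({"left": 90, "right": 2}, {"left": 100, "right": 100}),
    "bridging_post": ({"left": 55, "right": 48}, {"left": 100, "right": 100}),
}
ranked = sorted(items, key=lambda k: bridging_score(*items[k]), reverse=True)
print(ranked)  # ['bridging_post', 'divisive_post']
```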
3. Local Verification Networks
When national or global information is contested, local information often remains verifiable. People can observe their communities directly. They know their neighbors. Local verification networks--community groups, neighborhood associations, local journalism--provide an anchor for shared understanding that national media cannot.
Strengthening local information infrastructure is thus a countermeasure against national-level confusion. If people can trust their local networks, they have a foundation for evaluating claims about more distant events.
C. Institutional-Level Interventions
1. Platform Architecture Reform
Social media platforms currently optimize for engagement, which often means optimizing for outrage, conflict, and emotional arousal. Confusion-based manipulation exploits these incentives.
Architectural reforms might include: reducing algorithmic amplification of divisive content; introducing friction for sharing (requiring users to read articles before sharing them, for example); and surfacing context alongside content (labeling sources, showing related coverage, indicating the age of stories being shared).
Such reforms involve tradeoffs. Engagement optimization generates revenue; friction reduces it. But the tradeoffs are not necessarily unfavorable. Platforms that become known for manipulation and outrage may suffer long-term reputational and regulatory costs that outweigh short-term engagement gains.
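For concreteness, "introducing friction for sharing" can be as simple as a rule like the following sketch, which classifies a share attempt as allowed, nudged, or redirected to the article first. The dwell-time threshold and the signal names are invented for illustration.

```python
def share_friction(opened_article, seconds_on_article, min_dwell=30):
    """Classify a share attempt: 'allow', 'nudge', or 'prompt_to_read'.

    opened_article: whether the user opened the linked article at all.
    seconds_on_article: dwell time before tapping share.
    """
    if not opened_article:
        return "prompt_to_read"  # ask the user to open the link first
    if seconds_on_article < min_dwell:
        return "nudge"           # interstitial: "You've only skimmed this. Share anyway?"
    return "allow"
```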
2. Transparency Requirements
Manipulation is easier when it is invisible. Transparency requirements--disclosure of funding sources, identification of automated accounts, labeling of state-sponsored media--raise the costs of covert influence operations.
Such requirements have limits. Sophisticated actors adapt. Disclosure can be circumvented through intermediaries. But transparency shifts the burden: manipulation that must be hidden is more expensive than manipulation that can operate openly. Even incomplete transparency is better than none.
3. Counter-Speech Infrastructure
In the United States, content-based censorship raises serious First Amendment concerns, and most other democracies have analogous free-expression protections. An alternative is counter-speech: meeting bad speech with better speech rather than suppression.
Effective counter-speech requires infrastructure. Fact-checking organizations need funding and institutional support. Rapid-response capacity is necessary to match the speed of the firehose. Trusted voices--scientists, community leaders, public health officials--need platforms and resources.
The goal is not to out-shout the propagandists but to provide clear, consistent, trustworthy alternatives. In the behavioral economics framing, counter-speech provides a decision anchor--a reliable reference point against which competing claims can be evaluated.
D. What Does Not Work (and Why)
Before examining AI's role in depth, it is worth noting that not all intuitive responses are effective. Three deserve mention:
1. Fact-Checking Alone
Fact-checking is valuable but insufficient. The illusory truth effect persists even after correction. By the time a fact-check appears, the false claim has already been repeated thousands of times, generating familiarity that registers as credibility. Fact-checking runs on the propagandist's timeline, always reactive, always behind.
Fact-checking is most effective when integrated with prebunking (addressing techniques, not just claims) and when delivered by trusted sources through trusted channels.
2. Censorship
Removing false content has intuitive appeal but creates serious problems. It raises free speech concerns, empowers the censor, and can backfire through the "Streisand effect" (banned content becomes more attractive because it is banned). Those who distrust institutions already will distrust institutional judgments about what to censor.
More fundamentally, censorship addresses content when the problem is often technique. Removing one false claim does little when thousands more can be generated. The arms race favors the propagandist.
3. Matching Volume with Volume
Some suggest countering the firehose with a counter-firehose: flooding the information space with accurate content to compete for attention. This strategy misunderstands the problem. The firehose succeeds not by winning arguments but by exhausting attention. A counter-firehose adds to the cognitive load rather than reducing it. The result is more noise, not more signal.
The effective response to overwhelming volume is not more volume but better curation: trusted sources, clear signals, and reduced overall information load. Quality beats quantity when the goal is coordination rather than persuasion.
VI. The AI Countermeasure: Epistemic Infrastructure at Scale
The countermeasures outlined above--prebunking, source literacy, network strengthening, platform reform--share a common limitation: they require human effort that does not scale. Training individuals in cognitive awareness is valuable but slow. Building local verification networks is essential but labor-intensive. The firehose, by contrast, scales effortlessly. Automated systems can generate contradictory content faster than human institutions can respond.
This asymmetry suggests that effective countermeasures may require automation of their own. AI systems--the same technology that amplifies confusion--might also help counteract it. But this possibility requires careful analysis. AI is not neutral infrastructure; it can serve either side of the information war.
A. AI as Amplifier of Confusion
First, the threat. Large language models have dramatically reduced the cost of generating plausible-sounding content at scale. The firehose no longer requires human operators crafting each message; it can be automated. Synthetic text, images, and video can flood information channels faster than ever.
More subtly, AI systems can personalize confusion. Rather than broadcasting the same contradictory messages to everyone, automated systems can tailor confusion to individual vulnerabilities--exploiting each person's specific cognitive biases, social connections, and information diet. Personalized confusion is harder to recognize and harder to counter than broadcast propaganda.
The network fragmentation problem also worsens. Algorithmic recommendation systems, optimized for engagement, naturally amplify divisive content. When these systems incorporate generative AI, they can create infinite variations of engaging-but-fragmenting content, each targeted to a specific micro-audience. The filter bubbles become smaller and more sealed.
B. AI as Epistemic Infrastructure
Yet the same capabilities that amplify confusion might be redirected toward epistemic repair. Consider several possibilities:
1. Automated Prebunking
If prebunking works better than debunking, and if the bottleneck is reaching people before they encounter manipulation, AI systems could help. An AI assistant that recognizes manipulation techniques in real time--flagging firehose patterns, identifying coordinated inauthentic behavior, noting when claims contradict other claims from the same source--could function as a cognitive prosthetic, extending human verification capacity.
The key insight from the behavioral economics literature is that prebunking works by teaching techniques, not correcting claims. AI systems trained to recognize manipulation techniques could scale this teaching indefinitely. Rather than fact-checking each false claim (a losing battle), they could help users recognize why certain information patterns are suspect.
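A minimal sketch of one such technique-level check appears below: it flags sources that take opposing stances on the same topic, the kind of internal-inconsistency pattern an assistant might surface. The stance function is a hypothetical placeholder (a real system would use a trained stance or natural-language-inference model), and the flag describes a pattern rather than a verdict about which claim is true.

```python
from collections import defaultdict

def stance(claim_text):
    """Placeholder stance classifier: returns 'supports' or 'opposes'.

    Hypothetical stand-in; here we key off a toy negation marker only so the
    example runs end to end.
    """
    return "opposes" if "not" in claim_text.lower().split() else "supports"

def flag_self_contradictions(claims):
    """Flag sources that take opposing stances on the same topic.

    claims: list of dicts with 'source', 'topic', and 'text' keys.
    Returns {(source, topic): [texts]} for groups with mixed stances.
    """
    grouped = defaultdict(list)
    for c in claims:
        grouped[(c["source"], c["topic"])].append(c["text"])
    flagged = {}
    for key, texts in grouped.items():
        if len({stance(t) for t in texts}) > 1:
            flagged[key] = texts
    return flagged
```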
2. Trust Network Mapping
The cynicism model succeeds by eroding generalized trust. But trust is not monolithic; people maintain different trust relationships with different sources for different topics. AI systems could help users map and maintain their trust networks--tracking which sources have been reliable on which topics, noting when trusted sources contradict each other, identifying experts within one's extended social network.
This is social epistemology as a service: helping individuals leverage collective knowledge without being overwhelmed by collective noise. The goal is not to tell people what to believe but to help them identify who they have reason to trust and why.
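A toy version of such trust bookkeeping might look like the sketch below: a ledger that records, per source and per topic, how often checkable claims later held up, and reports a smoothed reliability score. The data model and the Laplace smoothing are assumptions made for illustration; a real system would also need provenance, decay over time, and user control.

```python
from collections import defaultdict

class TrustLedger:
    """Track how often each source's checkable claims held up, per topic."""

    def __init__(self):
        self.counts = defaultdict(lambda: {"held_up": 0, "failed": 0})

    def record(self, source, topic, held_up):
        key = (source, topic)
        self.counts[key]["held_up" if held_up else "failed"] += 1

    def reliability(self, source, topic):
        """Laplace-smoothed reliability in [0, 1]; 0.5 means no track record yet."""
        c = self.counts[(source, topic)]
        return (c["held_up"] + 1) / (c["held_up"] + c["failed"] + 2)

ledger = TrustLedger()
ledger.record("local_paper", "city budget", held_up=True)
ledger.record("local_paper", "city budget", held_up=True)
ledger.record("anon_account", "city budget", held_up=False)
print(ledger.reliability("local_paper", "city budget"))   # 0.75
print(ledger.reliability("anon_account", "city budget"))  # ~0.33
```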
3. Coordination Facilitation
The deepest problem identified in this article is not individual belief but collective coordination. People cannot act together if they cannot find each other and establish common knowledge. AI systems could lower coordination costs--helping people identify others who share their concerns, facilitating communication across fragmented networks, and establishing shared reference points for collective sensemaking.
This is delicate territory. The same tools that facilitate coordination for legitimate collective action could facilitate coordination for harmful purposes. Platform design and governance matter enormously. But the underlying capability--reducing the friction of finding like-minded others and establishing common knowledge--could shift the asymmetry back toward collective action and away from atomized confusion.
4. Synthesis as the New Primary Source
Perhaps the most profound shift is already underway: people are increasingly consuming AI-generated syntheses rather than primary information. When a user asks an AI assistant "What's happening with X?", they receive a consolidated summary rather than a list of sources to evaluate. The synthesis becomes the information.
This has ambivalent implications for confusion-based manipulation.
On one hand, synthesis could neutralize the firehose. If an AI system processes hundreds of contradictory claims and produces a coherent summary that identifies points of genuine agreement, flags unresolved disputes, and notes which claims lack credible sourcing, it performs the cognitive labor that the firehose is designed to exhaust. The user never experiences the overwhelming volume; they see only the distilled result. Confusion-by-volume fails when volume is compressed.
Similarly, synthesis could counteract fragmentation. If users across different filter bubbles query the same AI systems and receive broadly similar syntheses, a form of common knowledge re-emerges. The AI becomes a shared epistemic reference point--not because it is authoritative, but because it is common. People can coordinate around shared syntheses even when they cannot coordinate around shared primary sources.
On the other hand, synthesis introduces new vulnerabilities. If people stop evaluating primary sources entirely, the AI system becomes a single point of epistemic failure. Whoever controls the synthesis controls the narrative. The firehose targeting primary information channels becomes irrelevant; what matters is influencing the synthesizer.
This creates a new attack surface: synthesis manipulation. Rather than flooding the information space with contradictory claims, sophisticated actors might focus on poisoning the training data, exploiting model biases, or finding prompts that reliably produce favorable syntheses. The manipulation becomes invisible because users never see the underlying chaos--they see only a calm, confident summary that happens to serve particular interests.
Moreover, synthesis risks accelerating the cynicism problem. If people know that AI systems can be manipulated, and if they cannot verify syntheses against primary sources they no longer consume, they may distrust syntheses entirely. The result is the same learned helplessness, now directed at the very tools meant to help.
The challenge, then, is to build synthesis systems that are transparent about their limitations, that invite verification rather than discouraging it, and that distribute the synthesis function across multiple independent systems rather than concentrating it. Epistemic monoculture is as dangerous as epistemic fragmentation.
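One way to avoid a single point of epistemic failure is to cross-check several independent synthesizers and surface their disagreements. The sketch below assumes the synthesizers are interchangeable callables and uses a crude word-overlap measure to flag divergent pairs; both are placeholder assumptions, and divergence is a cue to consult primary sources, not a judgment about which synthesis is right.

```python
from itertools import combinations

def jaccard(a, b):
    """Crude lexical overlap between two summaries (word-set Jaccard)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def cross_check_syntheses(question, synthesizers, agreement_floor=0.3):
    """Query several independent synthesis systems and flag divergence.

    synthesizers: list of callables mapping a question to a summary string
    (hypothetical; e.g. wrappers around separately governed systems).
    Returns (summaries, divergent_pairs) where divergent_pairs are index pairs
    whose summaries overlap less than `agreement_floor`.
    """
    summaries = [s(question) for s in synthesizers]
    divergent = [
        (i, j) for i, j in combinations(range(len(summaries)), 2)
        if jaccard(summaries[i], summaries[j]) < agreement_floor
    ]
    return summaries, divergent
```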
5. Epistemic Transparency
AI systems could make their own reasoning transparent in ways that model good epistemic practices. When an AI assistant says "I found three sources that agree on X, but two that disagree," it demonstrates the kind of source triangulation that humans should practice. When it says "I'm uncertain about Y because the evidence is conflicting," it models appropriate epistemic humility.
This is not about AI systems telling people what to think. It is about AI systems showing how to think--demonstrating verification practices, source evaluation, and appropriate confidence calibration in every interaction. Over millions of interactions, such modeling could shift epistemic norms.
C. The Infrastructure Question
The deeper question is not whether AI could help, but whether it will. Technology does not deploy itself; it is shaped by institutions, incentives, and choices.
Currently, the dominant AI deployment model optimizes for engagement and advertising revenue--the same incentives that drive platform fragmentation. AI assistants are designed to be helpful to individual users, not to strengthen collective sensemaking capacity. The business model does not reward epistemic infrastructure.
Changing this requires recognizing that epistemic infrastructure is a public good--like roads, courts, or public health systems. Markets alone will not provide it adequately. Some combination of public investment, regulatory pressure, and institutional innovation is necessary.
The analogy to public health is instructive. We do not rely solely on individual choices to prevent epidemics; we build systems--sanitation, vaccination programs, disease surveillance--that protect collective health regardless of individual decisions. Similarly, protecting collective sensemaking may require building systems that function as epistemic public health infrastructure.
AI could be part of that infrastructure. But only if we choose to build it that way.
VII. Conclusion: The Stakes of Synthesis
This article has argued that contemporary information manipulation often operates through confusion rather than persuasion. Three mechanisms--the firehose model, the cynicism model, and the fragmentation model--work together to prevent coordinated opposition without requiring anyone to believe official claims. The target is not individual belief but collective sensemaking capacity.
The behavioral economics synthesis is clear: cognitive load depletes verification capacity, learned helplessness discourages political engagement, and confirmation bias in fragmented networks prevents convergence on shared understanding. The result is a population that cannot coordinate--not because they are fooled, but because they cannot find each other.
The key insight, drawn from Sonin's research, bears repeating: "Manipulation does not require mass belief in official messages. It requires only enough confusion, cynicism, or disconnection to prevent coordinated opposition."
The inverse is equally important: Effective response does not require perfect information. It requires enough trust, connection, and coordination capacity to act together despite uncertainty.
Countermeasures exist at individual, network, and institutional levels. Prebunking outperforms debunking. Strengthening peripheral connections reduces vulnerability. Platform architecture reform and transparency requirements raise the costs of manipulation. Counter-speech infrastructure provides reliable reference points for evaluating competing claims.
None of these countermeasures is sufficient alone. But together, they suggest that fatalism is unwarranted. Confusion-based manipulation is a technique--identifiable, analyzable, and resistible. Understanding the mechanism is the first step. Building institutions that preserve coordination capacity despite confusion is the essential next step.
The challenge is substantial but not insuperable. In the end, the most effective response to confusion-based rule is not certainty but solidarity--not perfect knowledge but functional trust--not winning every argument but maintaining the capacity to act together.
That capacity is worth preserving.