What synthetic audiences reveal about the oldest unsolved problem in market economies.
Yuval Noah Harari’s argument in Sapiens is not primarily about biology or evolution. It is about information. The cognitive revolution that separated Homo sapiens from every other species was, at its core, a capacity upgrade: the ability to construct and coordinate around shared fictions (myths, currencies, legal entities, brand identities) that exist nowhere in physical reality but organize the behavior of millions of people with extraordinary efficiency.
Capitalism, in Harari’s framing, is the most successful shared fiction humanity has produced. It runs on a single recursive insight: the belief that the future will be more productive than the present justifies borrowing against that future to invest in it today, which in turn makes the future more productive, which validates the original belief. Credit, equity, and markets are all institutional mechanisms for sustaining that loop. The engine of capitalist growth is not capital per se; it is the credible anticipation of future value.
What threatens that engine is not scarcity of resources but misallocation of them. And the primary cause of misallocation, across every sector, every market, and every era of capitalist development, is informational uncertainty about what people actually believe, want, and will respond to.
This is not a peripheral problem. It is the central problem.
The Imagined Order Has a Signal Problem
Harari described the shared fictions that organize economic life as “imagined orders”—belief systems that are real in their consequences even though they exist only in the minds of those who hold them. Money is valuable because everyone believes it is valuable. A brand carries equity because consumers attach meaning to it. A message persuades because it lands on the right cognitive terrain in the right audience at the right moment.
The production of imagined orders (building brands, crafting communications, developing products, shaping institutional narratives) is one of the largest categories of economic activity in advanced economies. Advertising alone exceeds $800 billion annually worldwide. Add market research, public affairs, strategic communications, content production, and the internal resources organizations spend on positioning and messaging, and the figure multiplies several times over.
The striking fact is how much of this investment operates with almost no reliable signal about whether the imagined orders being constructed are taking hold. Traditional research methods (surveys, focus groups, message testing) are periodic, expensive, geographically constrained, and structurally slow. The feedback loop between a message and a market response has historically been measured in weeks or months. By the time an organization learns that its communication is failing, significant capital has already been misallocated.
This productivity consequence is not abstract. Every dollar spent producing content that fails to resonate, every campaign built on an incorrect model of audience beliefs, every product positioned against a market assumption that doesn’t hold: all of these represent direct drag on the growth loop that capitalist economies depend on. The imagined order problem is a productivity problem.
Compressing the Feedback Loop
The first and most direct intervention synthetic audiences make against this problem is speed. Limbik’s Synthetic Response product returns resonance scores across defined audience segments in real time—seconds, not weeks. In economic terms, this is a velocity multiplier applied to one of the slowest feedback loops in the capitalist system.
The significance of this is easy to underestimate. In Harari’s account, the scientific revolution and capitalism co-evolved precisely because systematic inquiry reduced uncertainty about the physical world, and reduced uncertainty enabled more efficient capital allocation. The telescope, the microscope, double-entry bookkeeping, actuarial tables: each was, at its core, a tool for generating signals where there had previously been noise.
Real-time resonance scoring is that class of tool applied to the softest, historically least measurable variable in the system: how a specific message lands with a specific human population. The ability to compress message testing from a weeks-long field research cycle to an API call that returns in seconds does not just make existing workflows faster. It changes what kinds of questions it is economically rational to ask. Organizations that previously could test one message strategy can now test fifty. The search space for effective communication expands in proportion to the reduction in search cost.
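To make the economics concrete, here is a minimal sketch of what that workflow could look like in code. The endpoint, client function, and response fields are hypothetical placeholders, not Limbik’s actual API; the only point is that once each query is a cheap network call, exhaustively scoring every message variant against every segment becomes routine.

```python
# Illustrative only: the endpoint, request schema, and response fields below are
# hypothetical stand-ins, not Limbik's actual API.
import itertools
import requests

API_URL = "https://api.example.com/v1/resonance"  # placeholder endpoint


def score_resonance(message: str, segment: str, api_key: str) -> float:
    """Return a 0-1 resonance score for one message/segment pair (assumed schema)."""
    resp = requests.post(
        API_URL,
        json={"message": message, "segment": segment},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["score"]


def best_message_per_segment(messages: list[str], segments: list[str], api_key: str) -> dict[str, str]:
    """Score every message x segment pair and keep the winner for each segment.

    Exhaustive search is only rational because each call costs seconds;
    with field research, this grid would be a quarter's research budget.
    """
    scores = {
        (m, s): score_resonance(m, s, api_key)
        for m, s in itertools.product(messages, segments)
    }
    return {s: max(messages, key=lambda m: scores[(m, s)]) for s in segments}
```

The loop that would once have been a season of fieldwork is now a nested iteration over fifty candidate messages.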
This is not incremental improvement. It is a structural change in the information economics of audience understanding.
Stress-Testing Imagined Orders Before Committing Capital
Synthetic Research extends this logic from individual messages to strategic scenarios. Where Synthetic Response answers “how will this specific content land with this specific audience,” Synthetic Research enables organizations to simulate how complex audience segments will process and respond to multi-step communications, reputational events, or policy positions before any of those things have happened.
Harari’s cognitive revolution gave humans the capacity to plan against imagined futures, to simulate, anticipate, and prepare. Synthetic Research is an institutional operationalization of that capacity, applied to the problem of audience understanding at scale. It allows organizations to ask “will this imagined order stick?” before committing the capital required to build it.
The economic significance is analogous to what computational simulation has done for engineering and pharmaceutical development. In both fields, the ability to model outcomes before physical production reduced the cost of failure dramatically, which in turn enabled more ambitious experimentation. The expected value of a research program increases when the cost of a failed hypothesis falls. The same logic applies to communications strategy, product positioning, and institutional reputation management. When organizations can model how their audiences will respond to a scenario they haven’t yet executed, the cost of being wrong at the planning stage drops toward zero, and the cost of being wrong at the execution stage drops with it.
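A toy calculation makes the expected-value point concrete. Every number below is invented for illustration; nothing is drawn from Limbik or from any real research program.

```python
# Back-of-the-envelope illustration: cheaper failed hypotheses make it rational
# to test many more of them, which raises the expected value of the whole program.
def expected_value(p_success: float, payoff: float, cost_per_test: float, n_tests: int) -> float:
    """EV of n independent tests, each succeeding with probability p_success."""
    p_at_least_one_hit = 1 - (1 - p_success) ** n_tests
    return p_at_least_one_hit * payoff - cost_per_test * n_tests


# Field research: each test costs $50,000, so only two are affordable.
print(expected_value(p_success=0.2, payoff=1_000_000, cost_per_test=50_000, n_tests=2))
# -> 260000.0

# Simulated testing: each test costs $500, so thirty are affordable.
print(expected_value(p_success=0.2, payoff=1_000_000, cost_per_test=500, n_tests=30))
# -> roughly 983762
```

The payoff and the hit rate are identical in both scenarios; only the cost of being wrong moves, and the expected value of the program nearly quadruples.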
The aggregate productivity effect of this shift, measured across the hundreds of billions of dollars spent annually on communication and positioning, is not small.
The Deepest Lever: Encoding Cognitive Diversity Into AI
Limbik’s Synthetic Labels product operates at a different level of the stack, and its productivity implications are correspondingly more fundamental.
Harari’s account of Homo sapiens culminates in a question he never fully answers: what comes after the cognitive revolution? The tentative answer he offers in later chapters is artificial intelligence, not as a tool humans wield but as an extension of collective human cognition that increasingly operates autonomously. If language allowed humans to encode shared meaning across time and space, machine learning encodes that meaning into systems that can process it at a scale no individual human, and no human institution, can match.
The productivity implications of AI are already significant and widely discussed. What is less widely discussed is the alignment problem that limits those implications. Large language models and other AI systems are trained primarily on text produced by the populations that generated most of the world’s written content, a set that skews heavily toward particular languages, geographies, educational backgrounds, and cultural frameworks. The resulting systems are not cognitively diverse. They reflect a narrow slice of how humans actually process information, make decisions, and form beliefs.
This is not a philosophical concern. It is an economic constraint. An AI system that does not accurately model how a Vietnamese farmer, a Brazilian healthcare worker, or a Nigerian urban professional processes information is an AI system that will produce systematically biased outputs when applied to decisions involving those populations. And as AI is increasingly deployed in customer service, public health communication, financial products, and government services, the populations underrepresented in training data are precisely the populations most affected by those deployments.
Synthetic Labels addresses this constraint at its root. By generating training data that reflects the cognitive diversity of actual human populations—specifically across the dimensions Limbik models with its Foundation Mapping methodology—Synthetic Labels provides a mechanism for producing AI systems that work better for more people. In Harari’s terms, it encodes more of the actual diversity of human cognition into the systems that are becoming the substrate of knowledge work.
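To be concrete about what encoding cognitive diversity into training data could look like at the structural level, here is a schematic sketch. The segment names, fields, and target proportions are invented for illustration; they do not describe Foundation Mapping or the actual Synthetic Labels schema.

```python
# Sketch of the underlying idea only: segment names, attributes, and target
# proportions are invented and are not Limbik's Foundation Mapping methodology.
from collections import Counter
from dataclasses import dataclass


@dataclass
class LabeledExample:
    text: str     # the content being annotated
    label: str    # e.g. "resonates" / "does not resonate"
    segment: str  # the population segment the annotation is meant to represent


# Target share of the training set per segment (illustrative numbers).
TARGET_MIX = {
    "vi_rural_agricultural": 0.25,
    "pt_br_urban_healthcare": 0.25,
    "en_ng_urban_professional": 0.25,
    "en_us_suburban_general": 0.25,
}


def mix_report(examples: list[LabeledExample], tolerance: float = 0.05) -> dict[str, bool]:
    """Check whether each segment's share of the data is within tolerance of its target.

    Generating labels per segment means this check can pass by construction,
    rather than being limited to whichever populations dominate scraped text.
    """
    total = max(len(examples), 1)
    counts = Counter(ex.segment for ex in examples)
    return {
        seg: abs(counts.get(seg, 0) / total - target) <= tolerance
        for seg, target in TARGET_MIX.items()
    }
```

The design point is not the particular fields but the guarantee: representation becomes a property you specify and verify, not an accident of which populations happened to write the internet.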
The productivity multiplier here operates through AI capability itself. A better-aligned AI is a more accurate tool. A more accurate tool produces less error. Less error means better decisions, better allocation of resources, and faster progress through the growth loop. The downstream productivity gains from more cognitively representative AI training data are not limited to any single application or industry; they propagate through every system that uses AI to process human-generated inputs or produce human-facing outputs.
What Happens When the Uncertainty Is Removed
The case for synthetic audiences as a productivity technology is straightforward: they reduce the most persistent and expensive uncertainty in capitalist market systems. But the more important question is the second-order one. What does the world look like when every organization has real-time access to accurate models of audience cognition?
The first thing that changes is the structure of the communications industry itself. Right now, agencies and consultancies sell expertise: judgment about what will work, derived from experience and pattern recognition. When accurate audience models are universally accessible, that expertise becomes a commodity. The value migrates from knowing what resonates to knowing what to do with resonance data—strategy, execution, speed, creative differentiation. This has already happened in quantitative finance, where alpha generation moved from discretionary traders to systematic strategies once market data became cheap and ubiquitous. The same transition is coming for communications, and the firms that survive will be the ones that operate at higher velocity and tighter tolerances, not the ones with the best intuition.
The second thing that changes is institutional behavior. Right now, governments, corporations, and advocacy organizations operate with significant lag between action and audience response. That lag creates space for both strategic risk-taking and catastrophic misreads. When every organization can simulate audience response before committing to a position, the rate of unforced errors drops, and with it the number of communications disasters that result from leadership being overconfident about how a message will play. But the variance in institutional communication compresses simultaneously. Fewer organizations say things that badly miss their audience; fewer say things that are surprising or novel, because surprise and novelty carry measurable risk that shows up in the model.
The third change is the most structurally significant, and the one Harari’s framework anticipates most clearly. His imagined orders depend on a certain level of ambiguity to function. Money works because people don’t interrogate it too closely. Political coalitions work because voters project their own priorities onto candidates who are deliberately imprecise. When every message is optimized for maximum resonance with a specific audience, that productive ambiguity collapses. The shared fictions that organize large-scale cooperation become harder to construct, because there is no longer a single message that works across a diverse population: there are fifty messages, each optimized for a different segment, and those segments stop talking to each other.
The optimistic scenario is that organizations use real-time audience models to identify the messages that genuinely resonate across diverse populations, rather than defaulting to the ones that poll well with narrow elites. That requires discipline and a willingness to be surprised by what the data shows. The pessimistic scenario is that the technology gets used the way most communication technology gets used: to confirm existing priors, avoid risk, and segment audiences into ever-narrower silos that are easier to manage and harder to unite.
What is not in question is that this is a phase transition, not an incremental improvement. The organizations, governments, and movements that understand it early and build for it will have a structural advantage over the ones that treat synthetic audiences as a faster version of the old tools. Synthetic audiences are not a faster survey. They are a different epistemic environment. And the institutions that succeed in it will be the ones that redesign themselves around real-time feedback about human belief—not quarterly research reports and post-campaign autopsies.