What agent-based modeling reveals about how public perception moves and why it moves differently for different audiences.

Reputation is one of the most consequential variables in institutional life (for companies, governments, and public figures alike) and one of the least well-measured. Traditional approaches produce a periodic snapshot: a survey fielded quarterly, a tracking study run annually. The world moves faster than that. So does public opinion.

We’ve been exploring a different approach: treating reputation not as a survey score to be periodically sampled, but as a price to be continuously discovered. The architecture borrows from financial market theory, specifically the idea that prices in liquid markets aggregate dispersed information efficiently, reflecting collective judgment in real time. The question we’ve been working to answer is whether the same mechanism can be applied to public perception.

The short answer, based on our research to date, is yes, with important caveats. The longer answer is more interesting.

The Core Premise

The idea is straightforward in concept, complex in execution. A large population of simulated agents, each with a distinct demographic profile, psychological disposition, media diet, and value system, continuously processes information about a set of tracked entities. As new information arrives, agents update their perception of each entity’s reputational value. When that perceived value diverges from the current market price, they trade: buying if they believe the entity is undervalued, selling if they believe it is overvalued.

The result is a continuous price signal for each entity’s reputation, emergent from thousands of independent decisions rather than designed by any single analyst.

What makes this approach analytically interesting isn’t the mechanism (prediction markets have existed for decades). It is the agent heterogeneity. Because agents differ in how they process the same information, the system captures something traditional surveys can’t: the fact that reputational events don’t land uniformly. The same news produces different reactions from different audiences, and those differences are often where the real signal lives.
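The mechanism and the heterogeneity can be sketched together in a few lines of Python. Everything here is illustrative, not the production system: the agent fields, the linear perception update, and the simple clearing rule are assumptions chosen to make the structure visible. The point is that heterogeneous agents blend the same shared signals with private weights, and trade only when perceived value diverges from the current price.

```python
from dataclasses import dataclass

@dataclass
class Agent:
    """One simulated participant. Field names are illustrative."""
    risk_weight: float      # how heavily this agent weights geopolitical risk
    values_weight: float    # how heavily this agent weights values-based signals
    perceived_value: float  # the agent's current estimate of reputational value

    def update(self, risk_signal: float, values_signal: float) -> None:
        # Identical news, different weights: the same signals move
        # different agents by different amounts, and in different directions.
        self.perceived_value += (self.risk_weight * risk_signal
                                 + self.values_weight * values_signal)

    def decide(self, price: float, threshold: float = 0.5) -> str:
        # Trade only when perceived value diverges enough from price.
        gap = self.perceived_value - price
        if gap > threshold:
            return "buy"
        if gap < -threshold:
            return "sell"
        return "hold"

def clearing_step(agents, price, risk_signal, values_signal, k=0.01):
    """Toy price discovery: nudge the price toward net demand."""
    for a in agents:
        a.update(risk_signal, values_signal)
    net = sum({"buy": 1, "sell": -1, "hold": 0}[a.decide(price)] for a in agents)
    return price + k * net
```

Run repeatedly over an incoming signal stream, a loop like this produces the continuous price series described above, with each tick emergent from individual agent decisions rather than any single aggregation formula.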

Fifteen Days in April

We’ve been running this simulation continuously, and the fifteen-day period from April 1 to 15, 2026 produced some of the most instructive dynamics we’ve observed. The period coincided with the escalating conflict involving Iran, and the simulation’s response to that environment was revealing. The dominant signals were the Iran conflict (cited in roughly 30% of sell decisions, counting war, conflict, and Iran-specific references together), entity-specific threat exposure, and Trump administration statements about the trajectory of the war.

Across the entities we track, US consumer brands sustained consistent sell pressure through the first week of April. The pattern was not uniform; it varied by sector, by the specific associations agents had built over time with each entity, and by how different agent segments weighted geopolitical risk against values-based signals. Boeing is a notable case: agents cited Iranian Revolutionary Guard Corps threats against the company directly, producing sell pressure with a distinct character from the broader market-wide risk repricing affecting consumer brands.

Non-US technology companies, by contrast, saw net positive sentiment during the same period. Trump’s public statements about ending the war appear in agent reasoning as a discrete catalyst for brief risk-on reversals, visible as short-lived buying spikes on April 6 and April 13 across several entities. The simulation did not have the conflict scripted into it; it responded to incoming information signals the way the human archetypes its agents are modeled on would have.

Two Weeks of Tesla

The most instructive single case from this period is Tesla, not because the outcome was surprising, but because the mechanism was so legible.

Tesla opened April under sell pressure. The proximate cause was clear in agents’ stated reasoning: Q1 2026 delivery numbers had missed expectations, and the news drove a sharp selling response, particularly among momentum-oriented agents for whom the delivery miss represented a trend-reversal signal. The response was fast and concentrated in the first few days of the month.

Then, on April 13, something shifted. The sell pressure flipped to buying, not universally, but significantly. What drove it? Agents citing Tesla’s approval for supervised self-driving software in the Netherlands, combined with data showing Tesla gaining European market share even as overall sales volumes fell. Different agents weighted this differently: some saw a long-term technology story that the market was undervaluing; others remained skeptical. But the net effect was a brief, pronounced reversal in sentiment.

April 14 reversed it again. Tesla’s Q1 earnings disappointed, and the selling resumed, sharper than before, and with a different character. Where the April 3 selling was primarily driven by momentum traders responding to a data signal, the April 14 selling included a distinct cohort of value-conscious agents citing governance concerns. Same entity, same general direction of trade, different underlying reasoning.

This three-phase sequence (delivery miss, regulatory catalyst, earnings disappointment) compressed into fifteen days illustrates what the simulation is designed to surface: not just that sentiment moved, but why it moved, and which audiences drove each movement. A traditional survey fielded at the start and end of this period would show net negative movement for Tesla. It would miss the reversal entirely, along with what that reversal reveals about where residual Tesla confidence actually lives.

The Contrarian Case: Boeing

Tesla’s volatility makes it a natural focal point, but the more structurally interesting case from this period may be Boeing.

Boeing’s reputation price moved consistently in one direction throughout the entire fifteen-day window: down. There were no reversals, no recovery moments, no days where net buying sentiment turned positive. The pressure was steady, persistent, and distributed across agent types, not concentrated among any single segment.

This kind of sustained, undifferentiated negative sentiment is qualitatively different from the volatile, segment-specific movements we see in something like Tesla. It suggests a reputational condition that has become, in some sense, baseline: not reactive to specific news events, but reflecting a durable shift in how agents across demographic and psychological profiles assess the entity’s standing. It is harder to recover from than event-driven damage, because there is no discrete event to respond to.
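One way to make the Boeing-versus-Tesla distinction operational is to measure how concentrated sell pressure is across agent segments. The sketch below uses a Herfindahl-style index over segment sell shares; the record shape and segment names are illustrative assumptions, not the system's actual schema.

```python
from collections import defaultdict

def sell_concentration(trades):
    """Given (segment, side) trade records, return each segment's share
    of total sells and a Herfindahl-style concentration index.
    Low index: broad-based pressure (the Boeing pattern).
    High index: pressure concentrated in one cohort (the Tesla pattern)."""
    sells = defaultdict(int)
    for segment, side in trades:
        if side == "sell":
            sells[segment] += 1
    total = sum(sells.values())
    shares = {seg: n / total for seg, n in sells.items()}
    hhi = sum(s * s for s in shares.values())
    return shares, hhi
```

On a window where four segments each contribute a quarter of the selling, the index is 0.25; where one segment does all the selling, it is 1.0, making the two reputational conditions distinguishable from the trade stream alone.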

What Prediction Markets Add, and Do Not Add

We want to be careful not to overclaim what this architecture produces. Several important caveats are worth stating directly.

First, the simulation’s outputs are sensitive to the quality and representativeness of agent profiles. Agents that don’t accurately reflect how real population segments process information will produce systematically distorted price signals. This is not a small caveat; it is the central technical challenge of the approach, and it is why the underlying population simulation methodology matters enormously.

Second, the continuous nature of the signal creates interpretive temptations. Not every price movement is meaningful. Short-term volatility in the simulation can reflect noise in information inputs as much as genuine shifts in population sentiment. Distinguishing between signal and noise requires the same discipline one would apply to any high-frequency data stream.
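As one concrete form that discipline can take, a rolling z-score is a simple first-pass filter for separating notable moves from background volatility in a high-frequency series. The window and threshold below are illustrative choices, not calibrated values from the system.

```python
import statistics

def flag_moves(prices, window=24, z_cut=2.0):
    """Flag points whose deviation from the trailing window exceeds
    z_cut standard deviations. A crude noise filter, not a calibrated one."""
    flags = []
    for i in range(window, len(prices)):
        hist = prices[i - window:i]
        mu = statistics.mean(hist)
        sd = statistics.pstdev(hist)
        if sd == 0:
            flags.append(False)
            continue
        z = (prices[i] - mu) / sd
        flags.append(abs(z) >= z_cut)
    return flags
```

A filter like this deliberately ignores most ticks; only moves large relative to recent local volatility are surfaced for interpretation, which is the posture the caveat above argues for.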

Third, and perhaps most importantly: the simulation measures reputational perception as constructed by agents responding to news signals. It does not directly measure behavior (purchase intent, voting behavior, investor action). The relationship between reputation prices and downstream outcomes is an empirical question that requires ongoing validation against real-world data.

With those caveats stated, we think the approach offers something genuinely useful that traditional methods do not provide: a continuous, diagnostically rich view of how different audiences respond to the same information environment in real time. Not as a replacement for survey research, but as a complement to it, a way of generating hypotheses about where reputational pressure is coming from and which audiences are driving it, before a full survey can be fielded.

The Broader Research Question

The deeper question the simulation forces into focus is one that traditional reputation research tends to sidestep: whose perception counts, and when?

Standard reputation indices aggregate across populations. They tell you the average. But reputational risk rarely operates at the average. It concentrates in specific segments (a particular age cohort, a particular values orientation, a particular media ecosystem) and the aggregate score often obscures more than it reveals. A company can hold a stable average reputation while experiencing a serious, growing deficit with a specific audience that happens to be commercially or regulatorily consequential.

An agent-based architecture is, at its core, a way of preserving that heterogeneity rather than averaging it away. The price signal it produces is an aggregate, but the underlying data retains the segment structure, enabling the kind of diagnostic decomposition that most reputation tracking systems cannot support.
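The decomposition described above can be sketched directly: because each trade retains the segment and the cited reason, an aggregate move can be broken back down into who drove it and why. The record shape here, (segment, cited reason, signed trade), is an illustrative assumption, not the system's actual schema.

```python
from collections import defaultdict, Counter

def decompose(trades):
    """Break an aggregate flow down by segment and by cited reason.
    Each record is (segment, reason, signed_size): +1 buy, -1 sell."""
    net_by_segment = defaultdict(int)
    reasons_by_segment = defaultdict(Counter)
    for segment, reason, signed in trades:
        net_by_segment[segment] += signed
        reasons_by_segment[segment][reason] += 1
    aggregate = sum(net_by_segment.values())
    top_reason = {seg: counts.most_common(1)[0][0]
                  for seg, counts in reasons_by_segment.items()}
    return aggregate, dict(net_by_segment), top_reason
```

Applied to a window like Tesla's April 14, a decomposition of this kind is what distinguishes "net selling" from "momentum agents selling on the earnings data while values-oriented agents sell on governance concerns," which is the layer of answer the aggregate score cannot provide.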

That capability, we think, is where the most interesting applications of this research will emerge: not in replacing the aggregate reputation score, but in making the question “which audiences are driving this, and why?” answerable in real time.