A Season Unlike Any Other
To understand what Americans think about the artificial intelligence platforms competing for their attention, their data, and increasingly their purchasing decisions, it helps to understand the moment in which they were asked.
The spring of 2026 was, by any reasonable measure, the most compressed period of product launches, commercial milestones, and political theater the AI industry has yet produced. Between February and late April, Anthropic — the maker of Claude — executed a campaign that blended brand identity, enterprise growth, and Washington conflict into a single, highly visible narrative. On February 4, the company pledged that Claude would remain ad-free. On March 5, it publicly framed its break with the Pentagon after being labeled a supply-chain risk. A week later, it announced a $100 million Claude Partner Network. By early April, it reported that demand had pushed run-rate revenue above $30 billion, with the number of enterprise customers spending more than $1 million annually doubling to over 1,000 in less than two months.
Then April fused product momentum with political spectacle. Anthropic launched Project Glasswing on April 7 as a controlled rollout around its Mythos cybersecurity model, followed with Claude Opus 4.7 on April 16 and Claude Design on April 17. Its fight with the Trump administration — which had escalated from a March 9 lawsuit to a temporary court win on March 26 and an appellate setback on April 8 — culminated in Trump's April 21 comment that Anthropic was "shaping up" and could still work with the Pentagon.
OpenAI, for its part, was shipping just as aggressively. GPT-5.4 Thinking arrived on March 5, interactive math-and-science tools on March 10, and shopping upgrades on March 24. ChatGPT still claims more than 900 million weekly active users. Similarweb data for March 2026 ranks chatgpt.com first and claude.ai third among U.S. AI chatbot sites, with an 84.1% to 15.9% traffic split.
It was inside this window — a period of maximum visibility for Anthropic and sustained dominance for OpenAI — that a nationally fielded survey of roughly 1,500 American adults asked a deceptively simple question: Regardless of whether you have used them, how much do you generally trust each of the following AI platforms?
The answers reveal a market that is more stratified, more politically textured, and more structurally constrained than either the bullish Anthropic narrative or the incumbent ChatGPT story would suggest on its own.
The Headline Numbers: A Clear Hierarchy, With Caveats
The topline results establish a four-tier trust landscape. ChatGPT leads with 66.9% net trust (the share who say they strongly or somewhat trust it, minus the share who strongly or somewhat distrust it). Claude sits in a solid but distant second at 52.3%. DeepSeek registers 43.0%, and LeChat — the Mistral-backed platform — trails at 36.2%.
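The net-trust arithmetic can be sketched as follows. This is a minimal illustration: the response counts are invented, and keeping "never heard of" respondents in the denominator is an assumption about the survey's method, not a documented detail.

```python
# Net trust = (strongly trust + somewhat trust) - (somewhat distrust + strongly distrust),
# expressed as a share of all respondents. Whether "never heard of" stays in
# the denominator is an assumption; the survey may compute it differently.
def net_trust(responses: dict) -> float:
    total = sum(responses.values())
    trust = responses["strongly_trust"] + responses["somewhat_trust"]
    distrust = responses["somewhat_distrust"] + responses["strongly_distrust"]
    return round(100 * (trust - distrust) / total, 1)

# Illustrative counts, not the survey's raw data:
sample = {"strongly_trust": 400, "somewhat_trust": 350, "neutral": 100,
          "somewhat_distrust": 80, "strongly_distrust": 30, "never_heard": 40}
```

With these invented counts, `net_trust(sample)` returns 64.0: 75% trusting minus 11% distrusting.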
This chart shows the three key metrics for each platform. The gray bars — the share of respondents who have never heard of the platform — are as important as the blue ones. ChatGPT enjoys near-universal awareness at 93.5%. Claude, despite its spring of maximum visibility, is still unknown to roughly one in five Americans (19.4%). DeepSeek and LeChat fare worse, with "never heard of" shares of 22.6% and 32.4% respectively.
The 14.6-point trust gap between ChatGPT and Claude is the headline, but it is not the whole story. What makes the data analytically interesting — and what connects it to the turbulent context of the last few months — is what happens when you look underneath the topline.
The Education Gradient: AI Trust as a Marker of Information Access
If there is a single demographic variable that reshapes the AI trust landscape most dramatically, it is education. The pattern is steep and holds across all four platforms: broadly, the more formal education a respondent has, the more they trust AI — not just the market leader, but the category as a whole.
Among respondents with a postgraduate degree (n = 176–177), ChatGPT's net trust reaches 81.2% — 14.3 points above the overall topline. Claude hits 75.1%, a striking 22.8 points above its own topline. Even DeepSeek, the platform with the highest distrust in the overall sample, earns 66.5% net trust among postgrads, and LeChat reaches 50.0%.
At the other end of the spectrum, respondents with some college, an associate's degree, or trade education (n = 465–472) fall meaningfully below the topline on every platform. Their Claude net trust is just 40.3% — nearly 35 points below postgrads. Their aggregate AI trust index, a composite score computed across all four platforms, is essentially zero (-0.01), meaning their trust and distrust across the category roughly cancel out.
An aggregate trust index — scored from -2 (strongly distrust everything) to +2 (strongly trust everything) — makes the gradient vivid:
| Education level | Aggregate AI trust index |
|---|---|
| Postgraduate | 0.87 |
| Bachelor's degree | 0.39 |
| High school or less | 0.26 |
| Some college / associate / trade | -0.01 |
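The index itself is straightforward to compute. A minimal sketch, with one loud assumption: the survey write-up does not document how "neutral" and "never heard of" responses are scored, so both are coded as 0 here.

```python
# Score each (respondent, platform) rating on a -2..+2 scale and average.
# Coding "neutral" and "never heard of" as 0 is an assumption; the survey's
# actual index construction may handle them differently.
SCORES = {
    "strongly_trust": 2, "somewhat_trust": 1,
    "neutral": 0, "never_heard": 0,
    "somewhat_distrust": -1, "strongly_distrust": -2,
}

def trust_index(ratings):
    """ratings: one Likert label per (respondent, platform) pair."""
    return round(sum(SCORES[r] for r in ratings) / len(ratings), 2)
```

A respondent who somewhat trusts two platforms, is neutral on one, and somewhat distrusts the fourth would score (1 + 1 + 0 - 1) / 4 = 0.25 on this coding.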
One wrinkle complicates the gradient: the lowest index belongs not to respondents with the least formal education but to the some-college/trade group, whose -0.01 falls below the 0.26 of high school or less. Still, the overall mechanism likely blends exposure and cognitive framing. Higher-education respondents are more likely to have encountered AI tools in professional and academic settings. They may also bring a more differentiated mental model of what these platforms actually do — one that allows them to evaluate specific capabilities rather than defaulting to blanket suspicion. The result is that familiarity breeds trust, not contempt, and it does so across the entire competitive set, not just for the market leader.
This has a direct implication for how to read Anthropic's spring campaign. The audiences most likely to have absorbed the partner network announcement, the revenue milestones, the Pentagon drama, and the rapid product cadence are precisely the audiences that already show the highest trust in Claude. The campaign was, in effect, preaching to a well-educated choir — and the data suggests the choir was listening.
The Income Story: Parallel but Softer
Income tracks in the same direction as education, though the effects are somewhat smaller — consistent with the fact that income and education are correlated but not redundant.
Households earning $150,000 or more (n = 287–292) register 78.8% net trust in ChatGPT and 65.4% in Claude — 11.9 and 13.1 points above the respective toplines. Their aggregate trust index is 0.59, more than four times the score of households earning under $50,000 (0.14). The income gradient is monotonic: trust rises at every step from under $50,000 to $150,000-plus.
The income data reinforces a finding that deserves attention from anyone interpreting these results through an equity lens. The populations most likely to benefit from AI assistance — those with fewer resources, less access to professional advice, and more to gain from automated tools — are also the ones who trust AI the least. Lower-income respondents are not just less trusting of Claude or ChatGPT specifically; they are less trusting of the entire category. And they are substantially more likely to have never heard of the challenger platforms at all: 23.6% of respondents in households earning under $50,000 have never heard of Claude, compared to 12.8% of those earning $150,000 or more.
This is not merely a marketing problem. It is a structural feature of how AI trust is distributed in the American public, and it suggests that the benefits of the current AI boom are accruing disproportionately to those who already have the most.
The Political Landscape: More Textured Than the Headlines Suggest
The political breakdowns are where the data becomes most surprising — and most relevant to the spring's dominant narrative of Anthropic versus the Trump administration.
The conventional expectation, given the highly publicized conflict between Anthropic and the White House, might be that Republican-aligned respondents would show depressed trust in Claude relative to other platforms, and that Democratic-aligned respondents would show elevated trust. The data tells a more complicated story.
This chart shows how the ChatGPT-Claude trust gap varies across the subgroups where the spring's events were most likely to register. Among postgrads, the gap narrows to 6.1 points. Among liberal Republicans, it nearly vanishes (2.4 points). Among progressive Democrats, it is just 5.9 points. But among MAGA Republicans — the group most aligned with the administration that was actively fighting Anthropic — the gap is 11 points, and Claude's net trust is still a robust 59.4%, seven points above the overall topline.
The MAGA Finding That Defies the Narrative
The most counterintuitive result in the entire dataset is that MAGA Republicans (n = 187–189) are not the AI-skeptical group the media narrative would predict. Their aggregate AI trust index is 0.44 — essentially identical to progressive Democrats (0.43) and substantially higher than moderate Democrats (0.26). On Claude specifically, MAGA Republicans register 59.4% net trust, seven points above the overall topline and nearly ten points above moderate Democrats (50.0%).
This finding cuts against the assumption that the Trump administration's adversarial posture toward Anthropic would translate into anti-Claude sentiment among the president's base. It did not — at least not in a way that shows up in these data. The more plausible interpretation is that MAGA Republicans' relationship with technology is driven more by a general enthusiasm for disruption and innovation than by the specific political valence of any given company's Washington conflicts. The Pentagon controversy, in other words, does not appear to have activated tribal opposition to Claude in the cohort most aligned with the administration doing the fighting.
Trump's April 21 comment that Anthropic was "shaping up" may have helped defuse any incipient backlash. But the simpler explanation may be that most Americans — even politically engaged ones — do not map their AI platform preferences onto their partisan identities the way they might map their views on, say, immigration or healthcare. AI trust, at this moment, appears to operate on a different axis.
The Liberal Republican Outlier
The standout subgroup in the entire analysis is liberal Republicans (n = 48–49), who register the highest net trust of any political group on every single platform: 85.7% for ChatGPT, 83.3% for Claude, 79.2% for DeepSeek, and 55.3% for LeChat. Their aggregate trust index of 1.06 is the highest of any subgroup in the data — higher than postgrads, higher than $150,000-plus households.
This is a small group, and the estimates carry more uncertainty as a result. But the consistency across all four platforms makes it difficult to dismiss as noise. Liberal Republicans are, almost by definition, respondents who combine an openness to institutions and expertise with a comfort in market-driven solutions — precisely the profile for whom Anthropic's positioning as a principled, safety-conscious, ad-free alternative to OpenAI would be most legible and appealing. They likely read the Pentagon break not as a liability but as a signal of institutional integrity.
The Disengaged Bottom
At the other end of the political spectrum — not ideologically, but in terms of engagement — sit respondents who identify with no political party (n = 344–347) and those who say they are "not sure" about their political identity (n = 44–46). These groups show the lowest trust across the board. The no-party group registers just 38.7% net trust in Claude and 56.8% in ChatGPT, with 28.5% having never heard of Claude. The "not sure" group is even lower on ChatGPT (53.3%) and shows the highest "never heard of" rate for ChatGPT in the entire dataset (24.4%).
The pattern suggests that political disengagement and technology disengagement travel together. These are not respondents who have evaluated AI platforms and found them wanting; they are respondents who have largely not engaged with the category at all. Their low trust is ambient rather than informed — and it represents a fundamentally different challenge than the trust gaps observed among politically active subgroups.
The Awareness Ceiling: Claude's Structural Constraint
Here is the tension that the contextual backdrop makes sharpest. Anthropic had its most visible quarter in company history — a $100 million partner network, a run-rate revenue figure that made headlines, a federal lawsuit that put the company on the front page, and a product cadence that matched OpenAI's — and yet roughly one in five survey respondents had still never heard of Claude.
The awareness gap between Claude and ChatGPT is not just a branding problem; it is a structural constraint on Claude's trust ceiling. Among respondents who have heard of Claude, trust levels are competitive with ChatGPT's in many subgroups. But the 19.4% who have never heard of Claude are disproportionately lower-education, lower-income, and politically disengaged — the mass-market segments that Anthropic's enterprise-first, safety-forward narrative has not yet reached.
The gap diagnostics make this concrete. In every subgroup where Claude's trust gap with ChatGPT is widest, Claude's "never heard of" rate is also substantially higher than ChatGPT's. Among respondents with some college or trade education, the trust gap is 20.1 points — but the awareness gap is 17.8 points. Among those identifying with no political party, the trust gap is 18.1 points and the awareness gap is also 18.1 points. The two numbers move almost in lockstep, suggesting that a meaningful portion of Claude's trust deficit is actually an awareness deficit in disguise.
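The two diagnostics can be computed side by side. In this sketch, the no-party subgroup's net-trust figures and Claude's "never heard of" share are the numbers quoted above; ChatGPT's "never heard of" share for that subgroup (10.4%) is a hypothetical fill-in, since the text does not report it.

```python
def gaps(sub):
    """sub maps platform -> {"net_trust": pct, "never_heard": pct}.
    Returns (trust_gap, awareness_gap) in percentage points."""
    trust_gap = sub["chatgpt"]["net_trust"] - sub["claude"]["net_trust"]
    awareness_gap = sub["claude"]["never_heard"] - sub["chatgpt"]["never_heard"]
    return round(trust_gap, 1), round(awareness_gap, 1)

# No-party subgroup: net-trust and Claude awareness figures come from the
# survey text; ChatGPT's 10.4% "never heard of" share is hypothetical.
no_party = {
    "chatgpt": {"net_trust": 56.8, "never_heard": 10.4},
    "claude":  {"net_trust": 38.7, "never_heard": 28.5},
}
```

Under these inputs, `gaps(no_party)` returns (18.1, 18.1), the lockstep pattern described above.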
This chart pairs the trust gap (red) with the awareness gap (gray) for each subgroup, sorted from largest to smallest trust gap. The close tracking between the two bars in most subgroups suggests that Claude's trust deficit is substantially explained by the fact that many Americans simply have not heard of it. Where awareness is high — among postgrads, liberal Republicans — the trust gap compresses to single digits.
The implication for Anthropic's growth trajectory is significant. If Claude's awareness were to close toward ChatGPT's level — which the company's commercial momentum suggests is plausible over time — the trust gap would likely narrow further, because the groups that already know Claude trust it at near-ChatGPT levels. But closing that awareness gap requires reaching audiences that Anthropic's current narrative — enterprise partnerships, safety research, Washington conflict — does not naturally reach: lower-income households, respondents without college degrees, the politically disengaged.
DeepSeek and the Geopolitical Shadow
DeepSeek's position in the data deserves separate attention, because it functions as a barometer for how geopolitical anxiety intersects with AI trust.
At 43.0% net trust and 34.5% net distrust, DeepSeek carries the highest distrust of any platform in the survey. Some of that distrust is informed — concerns about Chinese AI and data sovereignty were actively circulating in public discourse during the survey window — but some of it is ambient: 22.6% of respondents have never heard of DeepSeek, and among those who have, the distrust rate is elevated relative to the Western-headquartered alternatives.
The education gradient is especially steep for DeepSeek. Postgrads trust it at 66.5%, while respondents with some college or trade education register just 32.5% — a 34-point gap, the largest education-driven swing for any platform in the dataset. This is consistent with a pattern where higher-information respondents are better able to separate geopolitical concern from platform-level evaluation, while lower-information respondents default to more categorical suspicion.
The political dimension adds nuance. Liberal Republicans — the highest-trust group overall — register 79.2% net trust in DeepSeek, suggesting that their trust orientation is genuinely platform-agnostic rather than driven by national-origin preferences. Progressive Democrats, by contrast, show 49.1% net trust in DeepSeek but 38.9% net distrust — the highest distrust rate for DeepSeek among any major political subgroup, and a finding that may reflect the left's greater sensitivity to data-governance and surveillance concerns.
LeChat: The Awareness Problem in Its Purest Form
LeChat, the Mistral-backed platform, illustrates what happens when a trust question is asked about a product that most respondents have never encountered. With 32.4% of respondents saying they have never heard of it — the highest "never heard of" rate in the survey — LeChat's 36.2% net trust is difficult to interpret as a meaningful evaluation of the platform itself. It is, more accurately, a measure of how Americans respond to an unfamiliar AI brand name: with mild, diffuse skepticism.
Even here, though, the education and income gradients hold. Postgrads register 50.0% net trust in LeChat; the some-college group registers 28.5%. The pattern reinforces the broader finding that higher-information, higher-resource respondents extend more trust to AI as a category, not just to the platforms they personally use.
What This Moment Tells Us About the Trust Trajectory
Read against the backdrop of the spring of 2026, the survey data supports several conclusions that go beyond the topline numbers.
First, Anthropic's momentum is real but concentrated. The groups where Claude's trust is highest — postgrads (75.1%), liberal Republicans (83.3%), progressive Democrats (63.5%), high-income households (65.4%) — are exactly the audiences that Anthropic's enterprise-first, safety-forward, politically visible narrative was designed to reach. The data suggests the narrative worked within those segments. Claude's near-parity with ChatGPT among postgrads (75.1% vs. 81.2%) and liberal Republicans (83.3% vs. 85.7%) is a meaningful commercial signal, even if the mass-market gap remains wide.
Second, the 14.6-point overall gap reflects a familiarity deficit as much as a trust deficit. ChatGPT's 93.5% awareness versus Claude's 80.6% means the platforms are not competing on equal footing in the general population. The subgroups where Claude's trust gap is widest are also the subgroups where its awareness gap is widest — and where awareness is comparable, trust is comparable too. This suggests that Anthropic's ceiling is not primarily a trust problem; it is a reach problem.
Third, political controversy has not polarized AI trust along expected partisan lines. The most counterintuitive finding in the dataset is that MAGA Republicans are not the AI-skeptical group, and the Pentagon fight has not translated into measurable anti-Claude sentiment among the president's base. MAGA Republicans trust Claude at 59.4% — above the overall topline and above moderate Democrats. AI trust, at this moment, appears to operate on an axis defined more by technology engagement and education than by partisan identity. This is a fragile equilibrium — a single viral moment or presidential tweet could shift it — but it is the equilibrium the data describes.
Fourth, the AI trust gap is also an equity gap. The populations that trust AI the least — lower-income households, respondents without college degrees, the politically disengaged — are also the populations with the most to gain from AI-assisted tools. The aggregate trust index drops from 0.59 among $150,000-plus households to 0.14 among those earning under $50,000. This is not a finding that any single company can solve, but it is one that the industry and its regulators should take seriously. If AI's benefits accrue primarily to those who already trust it, and trust is concentrated among the already-advantaged, the technology risks widening the gaps it is often promised to close.
Fifth, the better reading of this moment is not that Claude overtook ChatGPT, but that Anthropic managed to compress product releases, enterprise growth, and Washington conflict into one compelling story of momentum — and the survey data shows that story landing precisely where it was aimed. Among the high-engagement, high-information audiences that drive enterprise adoption and shape elite discourse, Claude is a serious competitor. Among the mass market, it remains a name that one in five Americans has never heard. The next chapter of this competition will be determined by whether Anthropic can translate its concentrated trust advantage into broader awareness — or whether ChatGPT's structural lead in familiarity proves self-reinforcing.
This analysis draws on 1,479–1,513 responses to a nationally representative survey fielded in the spring of 2026 on the Relay survey platform. The survey was quota balanced on age, gender, education, income, and state. Education was grouped into four tiers (high school or less; some college/associate/trade; bachelor's; postgraduate), income into four bands (under $50k through $150k+), and political identity was constructed from party identification crossed with ideological subtype. Contextual claims about Anthropic and OpenAI product timelines, revenue figures, and web traffic data are drawn from contemporaneous reporting and are not derived from the survey instrument.

