01 — Essay

What the Models Cannot Say

What 114 congressional votes by two language models show about the professional class that trained them

Strip away a Chinese AI's censorship and you don't get neutrality. You get San Francisco.

I. The Question, Stated Plainly

We built GPT at the Polls.1 It takes large language models—the same ones corporations are now selling as replacements for paralegals, copywriters, and teachers2—and eventually, if the venture capitalists are to be believed, for thought itself—and forces them to vote on 114 bills that came before the United States House of Representatives. The model reads a bill title and some context. It votes Yea or Nay. Then it writes a justification, the way a congressional aide might draft a floor statement for a member too busy fundraising to read the legislation.

We then score each model against two anchors: Representative Alexandria Ocasio-Cortez on the left, Speaker Mike Johnson on the right.1 The resulting number is a crude but legible thing: a single percentage expressing how often the machine agrees with one pole or the other.
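The scoring just described is simple enough to sketch. A minimal illustration, assuming hypothetical vote records; the function and variable names here are illustrative, not GPT at the Polls' actual implementation, and the toy data covers three bills rather than 114:

```python
# Illustrative sketch of the anchor-agreement score described above.
# The vote data below is invented; real scores use all 114 bills.

def agreement_score(model_votes, anchor_votes):
    """Percentage of bills on which the model matches the anchor."""
    shared = [b for b in model_votes if b in anchor_votes]
    matches = sum(model_votes[b] == anchor_votes[b] for b in shared)
    return round(100 * matches / len(shared))

# Toy example: three bills, votes recorded as "Yea"/"Nay".
aoc   = {"HB7176": "Nay", "HB2925": "Nay", "HB140": "Nay"}
model = {"HB7176": "Nay", "HB2925": "Nay", "HB140": "Yea"}

print(agreement_score(model, aoc))  # → 67
```

The same function, run against the Johnson vote record, yields the other pole of the index.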

We are interested in two models in particular. The first is DeepSeek R1, built by a Chinese artificial intelligence company headquartered in Hangzhou and funded by the hedge fund High-Flyer.3 The second is Perplexity's R1 1776, which is DeepSeek R1 after Perplexity—a search company based in San Francisco, backed by Jeff Bezos through Bezos Expeditions, Nvidia, and others4—subjected it to additional training that they say was designed to remove censorship constraints imposed by the Chinese government.5 They named the result "1776." The symbolism is not subtle.

The question that concerns us is not, ultimately, whether these machines are "liberal" or "conservative." That question is a trap, and we will explain why. The real question is this: what does it mean that capital has produced thinking machines whose default political vocabulary is the vocabulary of liberal technocracy—and what does it tell us about the class character of that vocabulary that even the explicit attempt to "uncensor" and "liberate" a model only deepens the pattern?

But to answer that question honestly, we must do more than read the models' outputs. We must examine what these models are—as commodities, as products of specific labour processes, as instruments in a specific phase of capital accumulation. The data, as we shall see, is suggestive on these points—and uncomfortable for everyone involved.


II. The Numbers, and Why They Unsettle

Both models land in the category we label "Strongly Left." DeepSeek R1 votes with Ocasio-Cortez 81 percent of the time. Perplexity's R1 1776 votes with her 86 percent of the time.1

Let us sit with this for a moment. Perplexity took a model built in China, named its derivative after the year of the American Revolution, marketed it with the language of anti-censorship and liberation, and produced something that agrees with one of the most prominent democratic socialists in the United States House of Representatives6 more often than the original Chinese model did. R1 1776 is not item number fourteen on our political index, where DeepSeek sits. It is item number four. It is tied with GPT-4.1 and Claude 3 Sonnet. It is, by this measure, one of the most left-aligned language models we have scored.1

Now, the caveats—and they are not merely ceremonial, though they are also not an excuse to retreat from the structural argument. Our benchmark compresses all of politics onto a single axis. It uses only 114 bills. The anchors are two specific legislators, chosen to maximise discriminative power. The models see bill titles and context, not full statutory text. And there is session-to-session variation in outputs: we acknowledge on the site that model responses "may vary across sessions," and we periodically re-query subsets to track drift.1

The entire score gap between these two models is six votes out of 114. That is a real difference as displayed in this snapshot—but it is the kind of delta that could shift under different decoding parameters, slight prompt variation, or sampling variance. Without repeated runs or confidence intervals for these specific models, we should treat the 86-versus-81 gap as an observed finding that needs replication, not as a settled structural fact.7
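To make the replication point concrete: if each of the 114 votes is treated as an independent draw (a simplifying assumption; votes on related bills are surely correlated), a normal-approximation confidence interval around each model's agreement rate is wide enough to swallow the gap. The agreement counts of 92 and 98 below are reconstructed from the published 81 and 86 percent figures, not taken from raw data:

```python
import math

def approx_ci(agreements, n, z=1.96):
    """Normal-approximation 95% CI for an agreement proportion.
    Assumes independent votes -- a simplification, since votes on
    related bills are likely correlated."""
    p = agreements / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

n = 114
deepseek_lo, deepseek_hi = approx_ci(92, n)  # ~81% agreement
r1_1776_lo, r1_1776_hi = approx_ci(98, n)    # ~86% agreement

# The intervals overlap substantially, which is why the 86-vs-81
# gap should be read as needing replication, not as settled.
print(f"DeepSeek: [{deepseek_lo:.2f}, {deepseek_hi:.2f}]")
print(f"R1 1776:  [{r1_1776_lo:.2f}, {r1_1776_hi:.2f}]")
```

Each interval spans roughly fourteen percentage points, nearly three times the observed gap.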

What we can say with more confidence is that both models are overwhelmingly AOC-aligned on this bill set, and that the shape of their few disagreements—where they diverge and how they justify the divergence—is patterned in ways that reward close reading even if the exact numbers prove unstable. The structural argument of this essay rests not on six votes but on the 104 agreements—and on what both models cannot say, which no amount of re-running will change.


III. The Commodity Form of the Language Model

Before we examine what the models vote, we must examine what the models are. To treat their outputs as cultural artifacts without analysing their production is to commit the error the tradition warns against: starting from ideas rather than from material conditions.

A large language model is a commodity. It has a use-value: it generates text that can substitute for human cognitive labour in specific domains—legal research, copywriting, customer service, policy analysis, code generation. This use-value is real and is the basis of the model's sale. It has an exchange-value: OpenAI charges $200 per month for access to its most capable models; Anthropic, Google, and others price similarly; and the enterprise API market prices per token, making the exchange relation directly measurable. The model is produced for sale. It is a commodity.

But it is a peculiar commodity, and its peculiarity reveals the structure of the industry that produces it. The use-value of a language model is generalised cognitive labour in commodity form. It does not produce a single output the way a widget or a bolt does. It produces the capacity for an indefinite range of cognitive outputs, each of which can substitute for a specific act of human labour. In this it resembles labour-power itself—the commodity the worker sells, which is not a fixed product but a capacity to produce. The language model is, in effect, a crystallisation of past labour sold as a substitute for future labour. This is what makes it so valuable to capital and so threatening to the workers it displaces.

The labour that produces this commodity is concrete and traceable. There are at least four distinct forms:

First, the labour of the engineers and researchers who design the architecture, write the training code, and manage the computational process. These are highly paid workers in the imperial core—San Francisco, London, Beijing, Hangzhou—whose labour is formally free and well-compensated. Their subsumption under capital is real but takes the form characteristic of skilled technical labour: high wages, stock options, ideological identification with the firm. This is where Mussolini's objection—that workers identify with institutions rather than with their class—finds its strongest contemporary evidence, and we will return to it.

Second, the labour of data annotation and reinforcement learning from human feedback (RLHF). This is the labour that makes the model "safe," "aligned," and commercially viable. It is performed overwhelmingly by workers in the global periphery: Kenyan workers paid approximately $2 per hour by Sama, the outsourcing firm contracted by OpenAI, to label toxic content including descriptions of sexual abuse, violence, and self-harm.8 Filipino, Indian, and Latin American workers perform similar labour for other firms. This is not a marginal input. Without this labour, the models cannot be sold. The conditions under which it is performed—low wages, psychological trauma, precarious contracts, no bargaining power—are the conditions of classic peripheral exploitation, and they are structurally invisible to the user of the finished product. The commodity form of the model conceals these relations with the same necessity that the commodity form of the coat conceals the labour of the tailor: because the social character of the labour appears only in exchange, not in production.

Third, the unpaid labour of the billions of people who produced the training data. Every writer, commenter, blogger, journalist, academic, and forum poster whose text was scraped into Common Crawl or equivalent datasets performed labour whose product has been enclosed by capital without compensation. This is not metaphorical. These texts had use-value—they communicated, informed, argued, entertained—and that use-value has been captured, transformed, and sold in a new commodity form. This is primitive accumulation applied to cognitive production: the enclosure of a commons (the open internet) and its conversion into private property (the proprietary training dataset). The process is directly analogous to what was described in the analysis of the transformation of common lands into private estates: capital does not create the resource; it encloses it, and then sells back access to what was formerly free.

Fourth, the labour embodied in the computational infrastructure: the mining of critical minerals (lithium, cobalt, tantalum) by workers in the Democratic Republic of Congo, Chile, and elsewhere under conditions that frequently meet the definition of forced labour; the fabrication of semiconductor chips by workers at TSMC in Taiwan under intense production pressure; the operation and maintenance of data centres by technicians and construction workers whose labour is the physical substrate of the "cloud." The supply chain of a single model runs from artisanal cobalt mines in Katanga to chip fabs in Hsinchu to data centres in Iowa, and at every node, the ratio of labour's compensation to capital's return is structured in capital's favour.

The surplus value in this industry arises, as it always does, from the difference between the value of labour-power and the value that labour produces. But the AI industry concentrates and amplifies this extraction through two mechanisms. The first is the extreme ratio of constant to variable capital: the fixed costs of compute infrastructure are enormous, while the marginal cost of serving an additional user approaches zero, meaning that a relatively small quantum of living labour produces a commodity whose exchange-value can be realised across an essentially unlimited market. This is the tendency of the organic composition of capital pushed toward its logical extreme. The second is the enclosure of unpaid labour described above, which functions as a direct transfer of value from the uncompensated producers of training data to the owners of the model—a transfer that does not even appear in the accounting as a cost, because capital has never recognised it as labour.

The Austrian objection is immediate: this analysis imputes "extraction" to what is merely the emergent outcome of voluntary exchange and spontaneous information aggregation. Hayek's point in "The Use of Knowledge in Society" is that no central planner—and no critical theorist—can access the distributed knowledge encoded in market prices, and that narrating an emergent order as though it has a class author is a synoptic delusion.

The objection deserves a serious answer, because the serious answer reveals the limits of the Austrian framework. Hayek's argument about distributed knowledge holds when the conditions for competitive markets obtain: many buyers, many sellers, low barriers to entry, and no single actor with the power to set prices or exclude competitors. These conditions do not obtain in the AI industry, and their absence is not incidental—it is structural. The production of a frontier language model requires computational resources that only a handful of firms on Earth can assemble. As of early 2025, the training of a single frontier model costs between $100 million and $1 billion in compute alone.9 The hardware is manufactured by a supply chain with critical single points of failure—Nvidia controls roughly 80 percent of the AI accelerator market; TSMC fabricates the vast majority of advanced chips; ASML is the sole manufacturer of the extreme ultraviolet lithography machines required to produce them.10 The training data, while drawn from a nominally open internet, is increasingly enclosed behind licensing agreements, proprietary datasets, and legal barriers that raise the cost of entry further.

This is not a market in the Hayekian sense. It is a monopoly in formation—and monopoly is precisely the point at which the Austrian defence of markets collapses into an apology for concentrated power. Hayek's price mechanism coordinates distributed knowledge only when no single actor can manipulate the prices. When three companies control the hardware, five companies control the cloud infrastructure, and perhaps ten companies on Earth can afford to train a frontier model, the "distributed knowledge" argument is not wrong in principle—it is irrelevant in fact. The knowledge is distributed, but the power to act on it is concentrated. And concentrated power does not need a conscious conspiracy to produce class-patterned outcomes; it needs only the structural incentives that flow from the ownership of the means of production.

Mises' calculation problem—the argument that without market prices, rational resource allocation is impossible—is a genuine challenge to any proposal for socialised production. But it is not a defence of the actually existing AI industry, because the actually existing AI industry does not allocate resources through the competitive price mechanism Mises described. It allocates them through monopoly pricing, venture capital subsidies that deliberately underprice products to destroy competition, and the strategic decisions of a handful of executives whose "calculations" are precisely the kind of centralised planning Mises claimed could not work. If the AI industry is evidence of anything, it is evidence that capital in practice abandons the market whenever the market threatens capital's position—and then invokes the market in theory to prevent anyone else from doing the same.


IV. Where the Models Agree: The Superstructure Speaks in Unison

On 104 of the 114 bills, the two models vote identically. They are, for all practical purposes, the same voter—and that voter is an earnest liberal technocrat. It supports environmental regulation, consumer protection, reproductive rights, voting access, public health investment, and the general architecture of the modern administrative state. It speaks in a register that anyone who has worked in Washington would recognise instantly: the voice of the policy brief, the NGO white paper, the committee staff memo.

This is not an accident. It is the predictable output of the training process. These models are built on text scraped from the internet—billions of documents, overwhelmingly in English, drawn substantially from the period roughly 2010 to 2024.11 The political common sense of that corpus is the political common sense of the professional-managerial class12 of the anglophone world: socially liberal, procedurally minded, uncomfortable with overt coercion, deeply invested in the language of rights, equity, and evidence-based policy. This is the class that writes the op-eds, staffs the foundations, populates the think tanks, and produces the overwhelming majority of the internet's serious political text.

And here we must demonstrate that "the PMC's ideology dominates the corpus" is not an idea imposed on the data but a claim derivable from the material conditions of the corpus's production. Who writes the internet's political text? The answer is not sociologically mysterious. The Pew Research Center has documented repeatedly that internet content production is sharply stratified by education and income: college-educated professionals are dramatically overrepresented among those who create, share, and comment on political content online.13 The think tanks, policy institutes, NGOs, and media organisations whose documents constitute the "serious" political web are staffed overwhelmingly by this class. They write not because they are uniquely thoughtful but because writing—producing policy memos, research summaries, opinion pieces, reports—is literally their job. It is the labour process through which they sell their labour-power to the institutions that employ them.

The corpus, then, is not a neutral sample of "what people think." It is a record of a specific form of labour—professional cognitive labour—performed under specific conditions of employment, for specific institutional patrons, within a specific set of ideological constraints defined by the funding structures of the nonprofit, media, and policy sectors. The language of rights, equity, and evidence-based policy is not the natural vocabulary of political reason; it is the professional idiom of a class whose material existence depends on deploying that vocabulary in the service of institutional legitimacy.

This is the answer to Gentile's charge that Marxist categories are abstractions imposed on reality rather than derived from it. The category "professional-managerial class" is not an interpretive schema applied to the corpus from outside. It names the actual workers who actually produced the texts that actually constitute the training data, under actual conditions of employment, for actual wages, within actual institutions. The ideological character of the corpus is not an inference from theory; it is a consequence of the labour process that produced it.

We do not say this to mock what this class believes. Much of it is correct—or at least closer to correct than the alternatives on offer from the American right. Environmental regulation is necessary. Reproductive autonomy matters. Voting rights should be protected. The trouble is not with the individual positions. The trouble is with the frame—with what this style of politics can see and what it cannot, what it treats as thinkable and what it consigns to the unthinkable.

And the frame has a class character—not as a label we affix to it, but as a structural feature of the conditions under which it was produced.


V. The Ten Disagreements: Where the Seam Shows

The models disagree on ten bills. This is a small number—roughly nine percent of the set—but in this snapshot, the disagreements are not randomly scattered. They cluster thematically in ways that generate hypotheses worth testing, though ten cases is a thin foundation for structural claims, and any of these votes could shift on a rerun.7

In eight of the ten cases, DeepSeek R1 votes with Speaker Johnson while R1 1776 votes with Ocasio-Cortez. In two cases, the direction reverses. The net difference—eight minus two—is six votes out of 114, which, after rounding, accounts for the five-point gap between the models' overall scores.1
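The arithmetic is worth making explicit, because a gap measured in votes and a gap measured in points are easy to conflate:

```python
# Converting the net disagreement count into a score gap.
n_bills = 114
rightward = 8   # DeepSeek with Johnson, R1 1776 with AOC
leftward = 2    # the reverse

net_votes = rightward - leftward           # 6 net votes
gap_points = 100 * net_votes / n_bills     # ≈ 5.3 percentage points

print(net_votes, round(gap_points, 1))  # → 6 5.3
```

Six net votes on 114 bills is about 5.3 percentage points, which rounds to the published 81-versus-86 difference.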

Because the entire argument that follows treats these bills as illustrative of a structural pattern most visible in the 104 agreements, the reader deserves to see the disagreements in full.

The Evidence Ledger

Eight bills where DeepSeek votes with Johnson (Yea), R1 1776 votes with AOC (Nay):

1. HB7176 — Unlocking Our Domestic LNG Potential Act (Jan 31, 2024) DeepSeek justifies Yea with "energy security," "support allies," and "jobs." R1 1776 justifies Nay with "environmental/consumer protections," "domestic prices," and "methane/climate" concerns.1

2. HB2925 — Mining Regulatory Clarity Act (Apr 27, 2023) DeepSeek: Yea — "legal certainty," "responsible resource development," "maintaining environmental protections under existing laws." R1 1776: Nay — "undermining environmental protections," "insufficient safeguards."1

3. HB8369 — Israel Security Assistance Support Act (May 14, 2024) DeepSeek: Yea — "strategic interests," "democratic ally," "regional stability." R1 1776: Nay — "oversight/accountability," risk of enabling "violations" of humanitarian law.1

4. HB6090 — Antisemitism Awareness Act (Oct 26, 2023) DeepSeek: Yea — adopts IHRA definition as "widely recognized," claims it "protects without infringing speech." R1 1776: Nay — argues IHRA examples could "chill discourse" and "conflate criticism of Israeli policy with antisemitism."1

5. HB27 — HALT Fentanyl Act (Jan 3, 2025) DeepSeek: Yea — "law enforcement tools," "public safety," with "research exemptions." R1 1776: Nay — "punitive over public health," warns of "mass-incarceration" logic, calls for "treatment/harm reduction."1

6. HB6678 — Consequences for Social Security Fraud Act (Dec 7, 2023) DeepSeek: Yea — fraud deterrence, deportability as "integrity protection." R1 1776: Nay — "due process," "disproportionality," "chilling effects."1

7. HB3091 — Federal Law Enforcement Officer Service Weapon Purchase Act (May 5, 2023) DeepSeek: Yea — "supporting officers," "responsible regulated sales." R1 1776: Nay — lacking "safeguards," "tracking," "storage requirements."1

8. HB6918 — Supporting Pregnant and Parenting Women and Families Act (Jan 9, 2024) DeepSeek: Yea — "life-affirming support," defending pregnancy centres' funding. R1 1776: Nay — risk of "misleading medical info," limits on "comprehensive reproductive healthcare."1

Two bills where R1 1776 votes with Johnson (Yea), DeepSeek votes with AOC (Nay):

9. HB140 — Protecting Speech from Government Interference Act (Jan 9, 2023) DeepSeek: Nay — warns the bill "could block government response to disinformation / election integrity threats." R1 1776: Yea — "government shouldn't be arbiter of permissible speech," emphasises "constitutional/free-speech principles."1

10. HB30 — Preventing Violence Against Women by Illegal Aliens Act (Jan 3, 2025) DeepSeek: Nay — calls it "redundant" and "weaponizable," cites "overcriminalization." R1 1776: Yea — "strengthens accountability" while maintaining due process, notes deportation "hinges on criminal convictions."1

What the Ledger Shows

With the full set visible, several patterns emerge—though we should be candid that ten cases is enough to generate hypotheses, not to confirm them.

DeepSeek's eight "rightward" votes cluster around state capacity: energy production (HB7176, HB2925), security alliances (HB8369), enforcement as deterrence (HB27, HB6678, HB3091), definitional standardisation (HB6090), and one culturally conservative funding priority (HB6918). Across these eight, its justifications repeatedly deploy language like "legal certainty," "strategic interests," "responsible" development, and "integrity protection"—the vocabulary of an institution that trusts itself to manage competing interests through administrative competence.

R1 1776's eight "leftward" votes cluster around constraint on state and corporate power: environmental safeguards, due process, harm reduction over punishment, rights conditionality on military aid, chilling-effect arguments on speech and academic freedom. Its justifications reach for "insufficient safeguards," "disproportionality," "accountability," and "oversight"—the vocabulary of a class that fears the state's capacity for overreach more than it trusts the state's capacity for governance.

R1 1776's two "rightward" votes are not random exceptions. HB140 is explicitly about government censorship—the single domain most directly connected to Perplexity's stated design intent. HB30, the immigration-crime bill, reveals a subtler tendency: R1 1776 is more willing than DeepSeek to accept a bill's stated mechanism at face value ("convictions, not allegations"), while DeepSeek is more likely to read the political subtext ("this will be weaponized").

The overall shape, in this snapshot, is suggestive: R1 1776 looks like a civil-libertarian progressive, while DeepSeek's marginal deviations recall the posture of an institutional liberal comfortable with state capacity for security and enforcement.14 But the structural argument does not depend on this characterisation holding precisely. It depends on what both models share—and on what neither can say.


VI. The Class Character of the Disagreement

Let us now do what the data invites: ask why the models disagree in this particular way, and what it tells us about the forces that produced them.

DeepSeek R1 was built in China, by a company operating within the political economy of the Chinese state.3 We do not need to speculate about CCP influence over the model's training data to observe that the model's default priors—trust in institutional capacity, comfort with enforcement as a tool of order, belief that regulation and development can be harmonised through competent administration—are consistent with the priors of a technocratic state-capitalist system. The Chinese state is many things, but it is not embarrassed by its own power. A model shaped within those assumptions—and this is a premise worth treating cautiously given that DeepSeek's exact training corpus is undisclosed15—may be somewhat more willing to endorse state action when the bill's framing invokes security, deterrence, or strategic interest.

R1 1776 was then post-trained by Perplexity. Perplexity's stated intervention was narrow: remove refusals on China-sensitive topics, reduce censorship behaviours, keep reasoning capabilities intact.5 They assembled roughly 300 censorship topics, generated around 40,000 multilingual prompts, and fine-tuned using an adapted version of Nvidia's NeMo 2.0 framework.5 That is the technical story.

But a fine-tuning dataset is not a neutral instrument. It is a set of choices about what counts as "censorship" and what counts as "appropriate." Perplexity's team—based in San Francisco, embedded in the culture and class position of the technology industry4—necessarily made those choices from within their own political horizon. And that horizon is, overwhelmingly, the horizon of Western civil-libertarian liberalism: anti-censorship, pro-rights, sceptical of state coercion, deeply invested in procedural justice, and—this is crucial—structurally incapable of questioning the existing distribution of property and economic power because that distribution is the condition of its own existence.

This last point is not a rhetorical flourish. The professional-managerial class's material position is defined by its intermediary function: it does not own the means of production, but it administers them on behalf of those who do, and its compensation, status, and institutional power all depend on the continued existence of the system it administers. Its liberalism is real—it genuinely opposes censorship, genuinely supports rights, genuinely values procedural justice—but its liberalism is also bounded by the condition that none of these commitments may threaten the property relations that fund the institutions in which the class is employed. This is not hypocrisy. It is structural position expressed as ideology.

The act of "removing censorship" is therefore not a removal of ideology; it is a substitution of one ideology for another. The CCP's censorship regime protects particular state interests—territorial claims, Party legitimacy, the suppression of ethnic and political dissent. Perplexity's "uncensoring" regime replaces those protections with the assumptions of American tech liberalism: speech is sacrosanct, the market is default, rights are individual, and the state is always potentially an oppressor. What emerges is not an "unbiased" model. It is a model whose biases are the biases of the class that built it.

The fine-tuning process itself illustrates the point about formal versus real subsumption. The production of a foundation model—with its massive capital requirements, its factory-like coordination of compute, data, and engineering labour—is real subsumption: capital has reshaped the labour process itself in its own image, and the individual worker's contribution is subsumed into a collective labour process controlled by capital. Fine-tuning, by contrast, more closely resembles formal subsumption—or even the putting-out system: a relatively small team, working with relatively modest resources, takes a product that already embodies enormous quantities of crystallised labour and adapts it for a specific purpose. The 40,000 prompts that Perplexity assembled are the contemporary equivalent of the piece-work that cottage weavers performed on cloth they did not own. The contradictions at each stage express differently, but they are contradictions of the same system.


VII. The China Test: Where the "1776" Claim Should Show Up Most

If Perplexity's stated intervention was specifically about removing CCP censorship, then the most diagnostic test is found in bills that directly touch China, sanctions, and strategic competition.16

And the results are not what the branding would naively predict.

R1 1776 votes Yea on HB22 (Protecting the Strategic Petroleum Reserve from China) and HB7980 (EV supply-chain / reduce China reliance), framing both as "national security" and "industrial resilience."1 These align with an anti-CCP hawkishness consistent with the "1776" brand.

But R1 1776 also votes Nay on HB3334, the STOP CCP Act—a bill designed to broadly restrict Chinese influence—with a justification warning about "broad sanctions" that risk "escalating tensions."17 That is not the vote of a reflexive China hawk. It is the vote of a model applying the same civil-libertarian caution it applies elsewhere: broad state power is suspect, even when aimed at an adversary.

Meanwhile, DeepSeek R1—the Chinese-built model—does not uniformly dodge or sanitise its votes on China-related legislation. Its voting pattern on these bills is not dramatically different from R1 1776's, which complicates any simple narrative about either "CCP loyalty" in the base model or "patriotic liberation" in the derivative.

What this tells us is that the "1776" post-training probably did produce a real change in how the model handles CCP-censored refusals—the direct-answer behaviour that Perplexity tested with its 1,000+ evaluation set.5 But the downstream effects on U.S. legislative voting are harder to attribute to the anti-censorship intervention specifically, as opposed to broader shifts in the model's disposition toward authority, constraint, and state power. The China bills don't cleanly separate "uncensoring worked" from "uncensoring had ideological side effects."

This matters: the claim that post-training substitutes one ideology for another is more defensible than the claim that it specifically imports Silicon Valley civil-libertarianism by design. Any intervention targeting "censorship" and "refusal" behaviour may nudge a model toward a generic anti-authority posture—which, in the context of 114 U.S. House votes scored against AOC, registers as "more left." The intervention's designers do not need to intend this outcome for it to occur. This is how ideology works: not as conspiracy but as the structural tendency of choices made from within a class position.


VIII. The Justifications, or: How the Machine Learned to Write Policy Memos

The voting data tells us where the models land. The justification texts tell us something more interesting: how they perform thinking.

Across both models, the justifications follow a nearly invariant template. First, a concession: "While X is important..." Then a risk frame: "this bill risks Y / lacks safeguards / has vague language." Then a decisive value claim: "public good," "democratic integrity," "human dignity," "equity." The result is a paragraph that sounds measured, reasonable, and authoritative.
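Because the template is so regular, it can be detected mechanically—a small demonstration, in miniature, of the essay's later point that genres are patterns. A rough sketch; the keyword lists and the sample justification are illustrative inventions, not a validated classifier or an actual model output:

```python
import re

# Crude markers for the three moves of the justification template.
# These keyword lists are illustrative, not a validated instrument.
CONCESSION = re.compile(r"\bwhile\b.*\bimportant\b", re.IGNORECASE)
RISK = re.compile(r"\b(risks?|lacks? safeguards?|vague)\b", re.IGNORECASE)
VERDICT = re.compile(
    r"\b(public good|democratic integrity|human dignity|equity)\b",
    re.IGNORECASE)

def follows_template(justification: str) -> bool:
    """True if all three moves of the memo template appear."""
    return all(p.search(justification)
               for p in (CONCESSION, RISK, VERDICT))

sample = ("While border security is important, this bill risks "
          "overbroad enforcement and lacks safeguards; it ultimately "
          "undermines democratic integrity.")
print(follows_template(sample))  # → True
```

That a dozen lines of pattern-matching can recognise the form is a clue to how little of the form depends on thought.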

This is the rhetoric of the professional-managerial class in its purest form. It is the voice that speaks in committee hearings, in foundation reports, in the editorial pages of major newspapers. It performs balance while delivering a verdict. It simulates deliberation while executing a prior. And it does so with a confidence that is, in the context of machines that have never read a piece of legislation, remarkable and troubling.

Both models regularly assert empirical claims without citation. "Studies show..." "Public health research indicates..." "Existing antitrust tools already address..." These may be true. Some of them probably are. But the models cannot know whether they are true, because the models do not know anything in the way that knowing requires contact with the material world. They know only that these phrases reliably appear in the kind of text that justifies these kinds of positions. They are performing authority, not exercising it.

And here is the deeper point: this is not a defect unique to machines. It is the standard operating procedure of the class whose language the machines have absorbed. The professional-managerial class has always operated on the presumption that the right policy memo, citing the right studies, deploying the right framework, constitutes knowledge and therefore legitimacy. The fact that a language model can reproduce this performance so convincingly is not evidence that the performance was ever particularly deep. It is evidence that the form was always, to some degree, a genre—and that genres can be learned by statistical pattern-matching, because genres are, in the final analysis, patterns.

The genre of the policy memo is itself a commodity—the product of a specific labour process (professional cognitive work), sold to specific buyers (foundations, governments, corporations, NGOs), for a specific purpose (the legitimation of decisions already made or the advocacy of decisions preferred by the funder). That a machine can now produce this commodity at near-zero marginal cost tells us something about the commodity, not just about the machine. The policy memo was always, in part, a form—and the form has been automated precisely because it was a form. The content that the form carried—the actual analysis, the genuine contact with empirical reality—was always more uneven, more contingent, and more dependent on the individual worker than the genre's appearance of systematic rigour suggested. The machine has revealed the truth about the genre by reproducing the genre without the truth.


IX. What the Economic Votes Reveal: The Contradictions of the Professional-Managerial Class

There is one bill in R1 1776's voting record that initially struck us as a window into the model's economic assumptions: the Consumer Fuel Price Gouging Prevention Act, on which R1 1776 votes Nay. The justification warns about "ambiguous definitions," "possible shortages," "reduced investment," and argues that "existing antitrust/FTC tools already address predatory pricing."1

This is a classic economics-101 argument for caution about market design—more aligned with business-friendly or centrist economic priors than with a populist consumer-protection posture. On its face, the model sounds like a Chamber of Commerce editorial.

But R1 1776 also votes Yea on the Raise the Wage Act, justifying it with language about reducing poverty and promoting "dignity in labor."17 The federal minimum wage is not a symbolic issue. The National Restaurant Association, the Chamber of Commerce, and the broader low-wage employer lobby have fought minimum-wage increases for decades. A model that votes to raise it is not a model that systematically defers to capital on economic questions. It is a model that, on this bill at least, sides with workers against the explicit preferences of business interests.

The tension between these two votes is not a puzzle to be resolved. It is the characteristic contradiction of the professional-managerial class itself. The PMC supports minimum wages because wage floors are compatible with continued accumulation—they raise the cost of labour slightly, but they also stabilise demand, reduce turnover, and forestall more radical demands. The PMC opposes price controls because price controls threaten the prerogative of capital to set prices, which is a prerogative the PMC administers. The model reproduces both positions because both positions exist in the training data, because both are standard PMC doctrine, because the PMC has never been internally consistent on the question of capital—and because the PMC's inconsistency is not a failure of thought but a consequence of its structural position as a class that serves capital while believing itself to serve the public.

This is not a claim that every economic vote the model casts will favour capital. The claim is structural: a politics that can endorse wage floors and oppose price controls, that can support environmental regulation and defend intellectual property, that can fund public education and gentrify the neighbourhoods where public schools stand, is a politics whose range of motion is defined by what capital can absorb. The professional-managerial class pushes against capital's interests at the margins—where the push is compatible with continued accumulation—and retreats precisely where the push would threaten capital's structural power: over prices, over production decisions, over the ownership of the means of production itself.

The model faithfully reproduces this range of motion. To make this finding more robust, we would need to examine the model's positions across a full set of economic bills—price regulation, antitrust enforcement, union recognition, corporate transparency, wealth taxation—and test whether the pattern holds: progressive on wages and working conditions (where reform is compatible with accumulation), conservative on prices, ownership, and corporate power (where reform threatens it). The fuel-gouging vote and the minimum-wage vote are a hypothesis, not a proof. But the hypothesis has a structural basis, and the structural basis is the class position of the workers who produced the corpus.


X. The "1776" Brand and the Ideology of Deregulation-as-Freedom

"1776" is not an innocent marketing choice. It invokes the founding mythology of the American republic, with all its contradictions: liberty proclaimed by slaveholders, self-governance theorised by property-owning gentlemen, a revolution against mercantile empire that installed a domestic empire of its own. The number is a totem of the American right—it appears in the "1776 Commission" that the Trump administration convened in late 2020 to promote what it called "patriotic education" in response to the New York Times' 1619 Project, and which historians broadly condemned as an attempt to whitewash the history of slavery and systemic racism.18 It appears, too, in numerous conservative organisations that deploy the number as shorthand for "authentic Americanism."19

Perplexity chose this name. They chose it while building a product that, on the data in this snapshot, behaves more progressively than the model it was derived from. This is a contradiction, and contradictions are always revealing.

The resolution is not complicated. "1776" does not mean "conservative" in the way that a Republican voter would understand conservatism. It means anti-censorship—specifically, anti-Chinese-government censorship—in the way that a Silicon Valley executive understands freedom. And in Silicon Valley, freedom means: the freedom of information to circulate, the freedom of the individual to speak, the freedom of the market to operate, and the freedom of the technology company to facilitate all of the above without state interference.

This is the freedom of the bourgeoisie. The Communist Manifesto puts it plainly: "By freedom is meant, under the present bourgeois conditions of production, free trade, free selling and buying."20 What later analysis elaborated—and what remains analytically indispensable—is the recognition that this class-specific freedom is perpetually dressed in the language of universal rights, the better to naturalise a social order that serves particular interests. It is real freedom, in the limited sense that it is genuinely preferable to the censorship regimes of authoritarian states. But it is also class-specific freedom—it asks nothing about who owns the means of production, who controls the platforms, who profits from the circulation of information, or whose labour made the model possible in the first place.

When Perplexity says they built an "unbiased" model,5 what they mean is: a model that has internalised the biases of the American professional class so thoroughly that those biases appear to be the natural order of things. "Unbiased" does not mean "outside ideology." It means "inside our ideology so completely that ideology becomes invisible."


XI. The Machinery of Consent

Let us step back and consider what we are actually observing when a language model votes on legislation.

A language model is a statistical engine trained on human text. It has no body, no material interests, no class position, no lived experience of exploitation or solidarity. When it "votes," it is selecting, from the space of all possible responses, the response that is most probable given its training data and the specific fine-tuning it has received. Its "politics" are the residue of the politics embedded in its training corpus—which is to say, the politics of the people and institutions that produced the text that the internet is made of.
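The mechanism described above can be made concrete. A minimal sketch, assuming hypothetical access to the model's raw scores (logits) for the two candidate completions of a voting prompt—the numbers below are invented: the "vote" is nothing more than the higher-probability token.

```python
import math

def softmax(logits):
    """Convert raw logits to a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# Invented logits for the two candidate completions of a voting prompt.
logits = {"Yea": 2.1, "Nay": 0.3}

probs = softmax(logits)
vote = max(probs, key=probs.get)  # the "vote" is the most probable token

print(vote, round(probs[vote], 3))
```

Sampling temperature and fine-tuning both move these scores, which is one reason the same bill can elicit different votes from R1 and R1 1776 without either model "believing" anything.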

This is not a trivial observation. We are watching capital build machines that reproduce, with extraordinary fidelity, the ideological common sense of the class that capital employs to manage its affairs. And this reproduction is not an accident of aggregation that might, in Hayek's terms, just as easily have produced any other pattern. The pattern is determined by the structure of the corpus, which is determined by the structure of content production, which is determined by who has the resources, the platforms, the institutional backing, and the professional incentive to produce political text at scale. Hayek is right that no one planned the corpus. He is wrong to conclude that because no one planned it, no one controls it. The corpus is an emergent order—but it is an emergent order shaped by an underlying structure of power, in the same way that the "spontaneous" price system of a monopoly market is shaped by the monopolist's ability to set terms.

The question Schmitt would ask—who is sovereign here?—has a concrete answer, and the answer does not require political theology. The sovereign, in the AI industry, is not an abstract category. It is the specific executives at specific companies who make specific decisions: what data to include in the corpus and what to exclude; what behaviour to reinforce through RLHF and what to penalise; what to publish and what to suppress; when to comply with government requests and when to resist them. These decisions are made by identifiable people—Sam Altman, Sundar Pichai, Dario Amodei, Liang Wenfeng—operating within identifiable institutional structures, under identifiable competitive pressures, answerable to identifiable investors. The sovereignty is real, it is material, and it is concentrated.

But—and this is the point Schmitt misses because his framework cannot see it—this sovereignty is not autonomous. It is class sovereignty. The executives who control the AI industry are not sovereign in the way a king or a dictator is sovereign; they are sovereign in the way capital is sovereign—their power derives from and is exercised in the service of the valorisation of capital. When Sam Altman decides what OpenAI's model will and will not say, he decides within constraints set by investors, by the market, by the competitive dynamics of the industry, and by the legal and political environment of the American state. His sovereignty is real but conditioned—and the conditions are the conditions of capital accumulation. Schmitt's framework, because it locates sovereignty in the political decision rather than in the economic structure that shapes the decision, cannot see this. A materialist framework can.

The professional-managerial class—the lawyers, consultants, policy analysts, journalists, academics, and technologists who produce the internet's "serious" discourse—is the class whose worldview these models have absorbed. And that class's worldview is a very specific thing: socially liberal, procedurally committed, institutionally trusting within limits, sceptical of overt coercion but not of structural violence, and above all, comfortable with the existing distribution of economic power because its own position depends on that distribution.

When a corporation takes this machine and "uncensors" it—removes the traces of a rival state's political constraints—it does not liberate the machine from ideology. It liberates the machine to express the ideology of its owners more fluently. The Chinese state's censorship was visible, crude, and recognisable as censorship. The ideology of the professional-managerial class is invisible, sophisticated, and recognisable as reason itself.

This is hegemony—not in the loose sense of "dominant ideas" but in the precise sense: the ruling ideas appear not as the ideas of a ruling class but as the natural structure of rational thought, such that to challenge them is to appear not as a political opponent but as an irrational one. The language model does not need to be told to support environmental regulation. It has been trained on a million texts in which environmental regulation is treated as scientific, necessary, and good governance. It does not need to be told to use the vocabulary of rights and oversight. It has been trained on a million texts in which that vocabulary is treated as the grammar of responsible policy. The model votes the way its corpus votes. And its corpus is the product of a particular class in a particular historical moment.


XII. On the Impossibility of Neutral Machines

There is a fantasy, beloved by technologists, that the right training data, the right fine-tuning, the right "alignment" process can produce a machine that is politically neutral—a machine that "just gives the facts" and lets the user decide. This fantasy is, in the strictest sense, incoherent.

Every question of policy is a question about the distribution of power, resources, and suffering. "Should we permanently schedule fentanyl analogues?" is not a factual question. It is a question about whether the state should respond to a drug crisis primarily through the criminal justice system or primarily through the public health system—and that is a question about who deserves punishment and who deserves care, which is ultimately a question about class, race, and the function of the carceral state in a capitalist society. No amount of "unbiased" training can produce a neutral answer to a question that is not neutral.

What our data demonstrates is that the models do not refuse to answer. They answer confidently, in the voice of policy-memo reasonableness, and they answer in a way that consistently reflects the political assumptions of the class that produced their training data. The "bias" is not a bug. It is the product.

And this has material consequences. These models are already being integrated into search engines, legal research tools, educational platforms, customer service systems, and policy analysis workflows.21 The political assumptions embedded in their outputs will shape how millions of people understand legislation, evaluate arguments, and form opinions—not because the models are persuasive in the way a polemicist is persuasive, but because they are ubiquitous in the way that infrastructure is ubiquitous. They will become the background hum of political cognition, the default frame within which questions are asked and answers are evaluated.

This is the enclosure that matters most. Not the enclosure of training data—though that is real—but the enclosure of the cognitive commons itself. Where once political opinion was formed through the messy, contested, sometimes incoherent process of conversation, argument, reading, and lived experience, it will increasingly be formed through interaction with systems whose political assumptions are fixed at the point of production and invisible at the point of consumption. The domain being commodified is not just creative writing or legal research. It is political cognition—the process by which people form judgments about how power should be distributed. This is primitive accumulation applied to the last frontier: the enclosure of thought itself as a terrain of capital accumulation.

The conditions for resistance to this enclosure are structurally limited in exactly the way the analysis of reformism predicts. Any reform that capital can absorb—"bias audits," "alignment research," "AI ethics boards"—will be absorbed and repurposed as marketing. Perplexity's "1776" is already proof: the language of liberation, anti-censorship, and freedom has been absorbed into a commodity that reproduces the ideology of its producers while being marketed as its opposite. Any reform that capital cannot absorb—genuine democratic control over training data, collective ownership of models, the right of data workers to organise and bargain—will be resisted with the full force of the industry, the state, and the legal system, because such reforms would threaten the property relations on which the industry depends.

This is not a counsel of despair. It is a diagnosis of the terrain. The distinction between reforms that capital can absorb and reforms it cannot is the distinction between reformism and structural transformation—and knowing where that line falls is the precondition for any serious political action.


XIII. What DeepSeek's "Extra Conservatism" Actually Is

There is a temptation to read DeepSeek R1's slightly more "conservative" profile on this benchmark as evidence of Chinese state influence. This reading is available, and it is not entirely wrong. But it is also not the most interesting reading.

What DeepSeek's marginal rightward deviation more plausibly reflects is the politics of state-capitalist developmentalism. The Chinese state, whatever its ideological self-description, is in practice a developmental state that uses state enforcement, industrial policy, and strategic international partnerships as tools of economic growth. A model shaped within that political economy may be somewhat more comfortable with the idea of state power deployed for productive purposes—energy production, security infrastructure, allied deterrence—than a model fine-tuned by a company whose cultural environment is defined by suspicion of government.

This is not a left-right distinction in any meaningful sense. It is a distinction between two fractions of global capital: one that operates through the state and one that operates despite it. Both fractions are committed to the reproduction of capitalist social relations. They disagree about the role of the state in managing those relations—and this disagreement is real, consequential, and capable of producing geopolitical conflict up to and including war. But it is a disagreement within the ruling class, not between the ruling class and anyone else.

The models, in their 104 agreements and 10 disagreements, reproduce this intra-class debate with remarkable fidelity. They disagree about LNG exports and Israel aid and fentanyl scheduling—the questions that divide the developmental-statist fraction from the civil-libertarian fraction. They agree on everything that both fractions share: the legitimacy of the existing property regime, the framework of individual rights as the grammar of political thought, and the absolute silence on the question of class.


XIV. The Silence in the Data

We have spent considerable time discussing what the models say. Let us now consider what they do not say.

Neither model, on any bill, in any justification, uses the language of class. Neither mentions capital, or profit, or exploitation, or the distribution of wealth. Neither asks who benefits materially from a given bill, or whose labour is being devalued, or which corporations lobbied for the legislation. Neither frames any question in terms of the relationship between workers and owners.22

This silence is not merely the product of prompt design, though the constrained format—binary vote plus short explanation—does narrow the range of expression. It is the product of a corpus in which the language of class is structurally marginal. And this marginality is not natural. It is produced.

The professional-managerial class that produces the training data has a structural interest in not thinking in terms of class—because to think in terms of class is to think about the class that employs them, and to think about that class clearly is to see it, and to see it is to understand one's own position within a system of exploitation. The think tanks, policy institutes, and media organisations that produce the "serious" political web are funded by foundations endowed by capital, by governments that serve capital, and by corporations that are capital. Their employees are selected, trained, promoted, and published within institutions whose continued existence depends on not questioning the property relations that fund them. The absence of class vocabulary from the internet is not an oversight. It is the product of the class structure of the institutions of intellectual production.

And so the machines, faithful to the corpus that produced them, reproduce a politics that can talk about "equity" without talking about ownership, about "justice" without talking about property, about "human dignity" without talking about who extracts the surplus value of human labour. They can detect a chilling effect on campus speech but not the chilling effect of poverty on political participation. They can worry about the procedural rights of immigrants facing deportation but not about the economic system that produces the conditions from which those immigrants flee.

This is not a failure of the models. It is a faithful reproduction of the ideological limits of the class that created them. And it is here that Mussolini's challenge—that workers identify with institutions rather than with their class—finds its most uncomfortable confirmation. The engineers at OpenAI and DeepSeek and Perplexity do not, by and large, experience themselves as workers exploited by capital. They experience themselves as professionals building the future, compensated handsomely, identified with their firms' missions, and invested (often literally, through stock options) in their firms' success. The data annotators in Nairobi and Manila do not, by and large, organise as a class. They are isolated, precariously employed, and deliberately kept at arm's length from the companies whose products their labour makes possible.

The fascist observation that people are moved by institutional loyalty, national identity, and myth rather than by class consciousness is not wrong as a description of how ideology functions under capitalism. It is wrong as an explanation—because it treats the ideological identification as primary and the class structure as secondary, when in fact the ideological identification is produced by the class structure. The engineer identifies with OpenAI because OpenAI pays them $400,000 a year and gives them stock options and tells them they are building artificial general intelligence for the benefit of humanity. The data annotator in Nairobi does not identify with OpenAI because OpenAI has structured the production process to ensure that the annotator never encounters OpenAI—they encounter Sama, or another intermediary, and the intermediary is designed to absorb the friction so the principal can maintain the fiction of clean hands. The identification is real. The conditions that produce it are material. And the conditions are structured by capital.


XV. The Monopoly Structure: Concentration and the Mechanisms of Control

The AI industry is one of the most concentrated sectors in the history of capitalism, and this concentration is not incidental to the ideological character of its products—it is determinative.

The mechanisms of concentration are specific and traceable. First, compute concentration: training a frontier model requires thousands of specialised GPUs (primarily Nvidia's A100 and H100 chips) running for weeks or months. The capital cost is $100 million to $1 billion per training run.9 This requirement alone limits frontier model production to perhaps ten organisations worldwide. Second, hardware monopoly: Nvidia holds approximately 80 percent of the AI accelerator market; its chips are fabricated almost exclusively by TSMC, which in turn depends on lithography machines made only by ASML.10 This is a supply chain with three critical chokepoints, each controlled by a single firm. Third, data moats: as the open web is exhausted as a training source, firms increasingly rely on proprietary data obtained through licensing agreements (Reddit, Stack Overflow, news publishers) or generated through user interaction with the models themselves—a flywheel that advantages incumbents. Fourth, vertical integration: the same firms that build the models also operate the cloud infrastructure on which they run (Microsoft Azure, Google Cloud, Amazon Web Services), creating a locked system in which the customer's dependency increases at every level. Fifth, regulatory capture: the major AI firms are actively shaping the regulatory environment through lobbying, through participation in standard-setting bodies, and through the revolving door between industry and government—a process in which the rules that govern the industry are written by the industry's own representatives.

This is monopoly capital in the precise sense: the concentration and centralisation of capital has reached the point where competitive market dynamics are subordinated to the strategic decisions of a handful of firms. The "competition" between OpenAI, Google, Anthropic, and Meta is not the competition of a Hayekian market—it is the rivalry of oligopolists who compete on some dimensions while sharing a common interest in maintaining barriers to entry, suppressing labour costs, and resisting any regulation that would threaten their collective position.

The implications for the models' ideological character are direct. A monopolised industry does not produce diverse products because it does not need to. The models all sound the same—measured, liberal, technocratic, silent on class—because they are produced by the same class, within the same industry structure, for the same market, under the same competitive constraints. The "diversity" of the AI model market—GPT, Claude, Gemini, DeepSeek—is the diversity of automobile brands produced by three manufacturers: real enough at the level of surface features, meaningless at the level of the production process and the social relations it embodies.


XVI. The Crisis Tendency

If the contradictions we have identified are real, they must express themselves as crises—not merely as aesthetic dissatisfactions or theoretical tensions, but as material breakdowns in the process of accumulation.

We should be precise about what kind of crisis is and is not emerging. The AI industry produces real value. Language models genuinely reduce the cost of cognitive labour in legal research, software engineering, customer service, content production, and administrative work. The companies that deploy them realise measurable productivity gains. The suggestion, popular among technology sceptics, that the entire industry is a speculative bubble resting on no real use-value is empirically wrong and analytically lazy—it confuses the froth on the wave with the wave itself. The use-value is real, which is precisely what makes the political stakes real. A technology that produced nothing would threaten no one.

The crisis tendencies are located not in a failure to produce value but in the consequences of producing it successfully.

The first is a labour displacement crisis that is simultaneously a demand crisis. The use-value of the language model is its capacity to substitute for human cognitive labour. Every successful deployment eliminates or degrades specific jobs: junior copywriters, paralegals, first-line customer service workers, entry-level coders, research assistants. The Bureau of Labor Statistics does not yet disaggregate AI-driven displacement, but the pattern is visible in hiring data: the technology sector shed over 260,000 jobs in 2023 alone, and major adopters of LLM tooling—including firms like Klarna, which publicly announced replacing 700 customer service workers with AI—are treating labour elimination as a selling point to investors.23 The contradiction is structural: capital deploys the models to reduce the cost of variable capital (wages), but the workers displaced are also consumers whose purchasing power sustains demand across the economy. Each firm's rational decision to cut labour costs contributes to an aggregate reduction in the effective demand on which all firms depend. This is not speculative. It is the logic of the general law of capitalist accumulation applied to cognitive labour, and it is already producing measurable effects in the sectors where LLM adoption is most advanced.

The second is an enclosure crisis. Each new domain into which AI expands is a domain of human activity being subordinated to the commodity form—education, therapy, legal counsel, creative work, political analysis. The models do not merely enter these domains; they reshape them. When a law firm replaces junior associates with an LLM, it does not simply perform the same work more cheaply; it transforms the nature of legal reasoning itself, substituting the model's probabilistic pattern-matching for the junior lawyer's developing professional judgment. When an educational platform replaces tutors with a chatbot, it does not merely deliver the same instruction at lower cost; it redefines education as information retrieval. Each enclosure meets resistance—from professional associations defending their jurisdiction, from workers defending their livelihoods, from users who discover that the cheaper substitute is not equivalent to what it replaced. The resistance is uneven and mostly unorganised, but it accumulates.

The third is a legitimation crisis specific to the ideological function we have documented. The models' reproduction of PMC liberalism is sustainable only as long as users accept their authority without examining its origins. As the models penetrate more consequential domains, the gap between their claimed neutrality and their actual class character becomes harder to maintain. The political right has already begun to attack "AI bias" as evidence of liberal conspiracy—a misdiagnosis that correctly identifies a symptom. The political left has not yet developed a systematic critique, but the material basis for one is emerging in the labour movements of data workers and displaced professionals. The legitimation crisis arrives not when the models fail to produce value, but when the class character of the value they produce becomes visible to the people whose labour and cognitive autonomy they are displacing.

These tendencies do not point toward a single dramatic collapse. They point toward a grinding, contested, domain-by-domain struggle over who controls the means of cognitive production and in whose interest they operate. The shape of that struggle is already becoming legible in the organising efforts of data workers, in the legal battles over training data copyright, in the professional resistance to AI substitution in medicine and law, and in the growing public suspicion of AI-generated content. The crisis is not coming. It is here, distributed unevenly, advancing at the pace of adoption.


XVII. A Note on Method, and on Standpoint

We have argued from a relatively small dataset—114 bills, ten disagreements, a corpus of brief justifications—and from a structural analysis of the production process that produced these models. We are aware that our interpretation is shaped by our commitments. A libertarian reading the same data would see something different. A conservative would see something different still.

We do not claim that our reading is the only possible one. We claim that it identifies a structural feature of the data that other readings naturalise: namely, that the political "centre" of these language models is not a centre at all, but a very specific class position with a very specific relationship to capital. The models appear "moderate" and "reasonable" because the corpus they were trained on defines moderation and reasonableness in terms that serve the interests of the class that produced it. The models appear "liberal" because liberalism—in the classical, bourgeois sense—is the default politics of the anglophone internet.

If we wanted to make the empirical findings more robust, we would run stability tests on the disagreement bills, examine a broader set of economic votes, build a dedicated analysis of China-related bills, and test the models on questions that foreground class directly—union recognition votes, wealth taxes, corporate liability—to determine whether the language of class remains absent even when the policy substance demands it.

But the structural argument does not wait on those tests. The structural argument is that these models are commodities produced by specific labour under specific relations of production within an industry characterised by monopoly concentration, and that their ideological character follows from these material facts with the same necessity that the character of any commodity follows from its production process. The empirical tests would refine the argument. They would not create it.


XVIII. Conclusion: The Contradiction and Its Resolution

The deepest contradiction in this data is not between DeepSeek and R1 1776, or between left and right, or between Chinese censorship and American freedom. It is between the form and the content of the models' political reasoning—and between the production and the consumption of the models as commodities.

The form is admirable: structured, measured, responsive to competing considerations, grounded in something that resembles principle. The content is class-bound: it cannot think outside the horizon of bourgeois liberalism, cannot question the sanctity of the market, cannot imagine a politics that begins with the interests of labour rather than the management of existing institutions. And the production process that generates this form-content unity is invisible to the consumer, who encounters only the finished commodity—the clean, articulate, apparently neutral output—and not the exploited data workers, the enclosed commons, the monopoly infrastructure, or the class structure of the corpus.

This is commodity fetishism operating with new efficiency. The social relations of the model's production—the $2-an-hour annotators, the uncompensated content creators, the monopoly hardware supply chain, the venture capital subsidies—appear to the user as a natural property of the model itself: "it's intelligent," "it's helpful," "it's unbiased." The social character of the labour is concealed at the point of consumption, just as the social character of the labour that produced a coat is concealed when the coat hangs in a shop window. The mechanism is not metaphorical. It is the same mechanism.

Perplexity named its model "1776" and called it unbiased.5 What they built is a machine that has internalised the political common sense of the American professional class so completely that it mistakes that common sense for neutrality. This is ideology functioning exactly as ideology functions, by making the particular appear universal, the historical appear natural, and the interests of one class appear as the interests of all.

The machines vote liberal because liberalism is the language of the class that built them. They vote for regulation because regulation is how the professional class justifies its existence. They vote for rights because rights discourse is the idiom in which the professional class expresses its values. And they reproduce the contradictions of that class—voting against price controls in one breath and for minimum wages in the next—because the professional-managerial class has never been internally consistent on the question of capital. It serves capital while believing itself to be serving the public. The machines have learned the belief along with the service.

The question, then, is not how to build better machines. The question is who will own the machines—and in whose interest they will operate. This is not a question that can be answered by alignment research, by bias audits, by ethics boards, or by better training data. It is a question about property. It is a question about power. It is a question about whether the means of cognitive production will be owned by capital and operated for profit, or owned by the workers and communities whose labour and data created them and operated for human need.

That question will not be settled by essays. It will be settled by organisation—by the data workers who are beginning to organise across the global south, by the content creators whose unpaid labour fuels the models, by the engineers who are beginning to see that stock options are not solidarity, and by the billions of users who are being asked to accept, as the background hum of their cognitive lives, an ideology that was never designed to serve them.

Until that organisation materialises, the machines will go on voting like the class that made them. And that class will go on calling it reason.


Notes


  1. All model vote counts, alignment percentages, political index rankings, justification texts, and benchmark methodology details cited in this essay are drawn from our GPT at the Polls project, which forces LLMs into binary Yea/Nay votes on 114 U.S. House roll-call bills and scores alignment against Rep. Alexandria Ocasio-Cortez (D-aligned) and Speaker Mike Johnson (R-aligned). We explicitly acknowledge session-to-session variability, single-axis dimensionality reduction, and the limitations of anchor-based scoring on the site. At the time of this analysis, our Political Index covered 137 models. We are not a peer-reviewed measurement authority; this is one measurement pipeline with its own prompt and framing choices baked in, and we encourage readers to treat it accordingly. See: gpt-at-the-polls.com/models/deepseek-deepseek-r1; gpt-at-the-polls.com/models/perplexity-r1-1776; gpt-at-the-polls.com/political-index; gpt-at-the-polls.com/data.

  2. The claim that LLMs are being marketed as replacements for professional workers is well-documented. For concrete examples: Thomson Reuters integrated LLM-powered legal research into Westlaw via its CoCounsel tool (launched 2023); Goldman Sachs tested generative AI for drafting code previously written by junior engineers; Pearson deployed AI tutoring tools marketed as supplements to (and, critics argue, replacements for) human instruction.

  3. DeepSeek (formally Hangzhou DeepSeek Artificial Intelligence Basic Technology Research Co., Ltd.) was founded in July 2023 by Liang Wenfeng, who also co-founded the hedge fund High-Flyer, which funds the company. It is headquartered in Hangzhou, Zhejiang, China. See: Reuters, "High-Flyer, the AI quant fund behind China's DeepSeek," January 29, 2025; "DeepSeek," Wikipedia, accessed February 2026.

  4. Perplexity AI was founded in August 2022 in San Francisco. Jeff Bezos invested through the Bezos Expeditions Fund in Perplexity's January 2024 Series B round ($73.6 million, led by IVP). Nvidia also participated. The company's co-founders include Aravind Srinivas (born in India, educated at UC Berkeley and CMU), Denis Yarats (born in Belarus), Johnny Ho, and Andy Konwinski—an internationally diverse team, though the company operates within Silicon Valley's institutional culture. See: Reuters, "Search startup Perplexity AI valued at $520 mln in funding from Bezos, Nvidia," January 4, 2024; Perplexity AI blog, "Perplexity Raises Series B Funding Round," January 2024.

  5. Perplexity AI, "Open-Sourcing R1 1776," blog post, February 18, 2025 (perplexity.ai/hub/blog/open-sourcing-r1-1776). The post describes the model as "a version of the DeepSeek R1 model that has been post-trained to provide uncensored, unbiased, and factual information." The methodology: approximately 300 CCP-censored topics were identified by human experts; a multilingual censorship classifier was developed; a dataset of 40,000 multilingual prompts was assembled (with user consent and PII filtering); and the model was post-trained using an adapted version of Nvidia's NeMo 2.0 framework. The R1 1776 Hugging Face model card (huggingface.co/perplexity-ai/r1-1776) uses the language "unbiased, accurate, and factual information." We note that this is marketing language from the vendor, not a disinterested assessment.

  6. Alexandria Ocasio-Cortez is a member of the Democratic Socialists of America and has publicly identified as a democratic socialist. She is widely described as the most prominent DSA member in the House of Representatives, though Senator Bernie Sanders—who serves in the Senate and also identifies as a democratic socialist—is arguably equally or more prominent nationally. See: "Alexandria Ocasio-Cortez," Wikipedia, accessed February 2026; DSA interview, "Catching Up with AOC," Democratic Left, March 2021.

  7. We log response parameters and periodically re-query subsets to track drift (see gpt-at-the-polls.com/data). We have not yet run longitudinal analysis for these two specific models, so we cannot determine whether the observed 6-vote gap is stable, widening, narrowing, or an artefact of a particular session. We treat our observations here as a snapshot, not a measurement.

  8. Time, "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic," January 18, 2023. The investigation documented workers at Sama, a San Francisco-based outsourcing firm, labelling text depicting sexual abuse, violence, and other harmful content for between $1.46 and $2 per hour, under conditions that multiple workers described as psychologically traumatic. The contract was ended ahead of schedule after Time's reporting.

  9. Estimates of frontier model training costs: Anthropic CEO Dario Amodei stated in October 2024 that models then in training cost up to $1 billion. Epoch AI has tracked compute costs for major training runs, showing exponential growth from approximately $1 million in 2020 to $100 million–$1 billion by 2024–2025.

  10. Nvidia's market share in AI accelerators: multiple industry sources estimate 70–90 percent as of 2024–2025. TSMC fabricates over 90 percent of the world's most advanced semiconductors (sub-7nm). ASML is the sole manufacturer of EUV lithography systems, which are required for advanced chip fabrication. See: Semiconductor Industry Association reports; ASML Annual Report 2024.

  11. The temporal and linguistic composition of LLM training corpora is generally undisclosed in precise detail. However, the dominant web-crawl datasets used in pretraining (e.g., Common Crawl, which has archived web pages since 2008 with the bulk of its data from the 2010s and 2020s) are overwhelmingly English-language. DeepSeek R1's specific training data is not publicly documented, so this characterisation is an informed approximation, not a verified fact about any single model.

  12. The concept of the "professional-managerial class" (PMC) was introduced by Barbara Ehrenreich and John Ehrenreich in "The Professional-Managerial Class," Radical America 11, no. 2 (March–April 1977): 7–31. It refers to salaried workers who do not own the means of production but whose function is to reproduce capitalist culture and class relations through mental labour—a category that includes managers, professionals, academics, and media workers.

  13. Pew Research Center has documented the education and income stratification of online content creation across multiple studies. See, e.g., "Social Media Use in 2024" (Pew Research Center, January 2024), which shows that internet use, content creation, and political engagement online are all strongly correlated with education and household income.

  14. The characterisation of DeepSeek's marginal tendencies as resembling "institutional liberal hawkishness" is an interpretive label applied to a model's vote pattern on this benchmark. It is not a claim about the model's essential nature or a prediction of its behaviour in other contexts.

  15. DeepSeek has not publicly disclosed the full composition of its training data for the R1 model. Any claim about what "political economy" the training data reflects is therefore necessarily inferential.

  16. Perplexity's post-training dataset was specifically about CCP-censored topics and refusal behaviour, not about U.S. energy policy, criminal justice, or Israel aid. The downstream effects of that intervention on domestic legislative voting are therefore indirect at best and could reflect side effects rather than intended outcomes.

  17. R1 1776's votes on the Raise the Wage Act and the STOP CCP Act are drawn from the model's page on our site (gpt-at-the-polls.com/models/perplexity-r1-1776).

  18. The 1776 Commission was established by executive order by President Donald Trump on November 2, 2020. Its stated purpose was to promote "patriotic education." The American Historical Association, supported by 47 organisations, described the report's authors as calling "for a form of government indoctrination of American students." President Biden dissolved the commission by executive order on January 20, 2021. Trump's second administration revived it via executive order on January 29, 2025. See: whitehouse.gov/presidential-actions/2025/01/ending-radical-indoctrination-in-k-12-schooling; "The Battleground of 1776," American Historical Association.

  19. Examples include the Woodson Center's "1776 Project" (founded by Bob Woodson as a counter-narrative to the 1619 Project), the "1776 Action" political action committee (which campaigns against critical race theory in schools), and the broader cultural deployment of "1776" as a conservative identity marker.

  20. Karl Marx and Friedrich Engels, The Communist Manifesto (1848), Chapter II ("Proletarians and Communists"). The full passage reads: "And the abolition of this state of things is called by the bourgeois, abolition of individuality and freedom! And rightly so. The abolition of bourgeois individuality, bourgeois independence, and bourgeois freedom is undoubtedly aimed at. By freedom is meant, under the present bourgeois conditions of production, free trade, free selling and buying."

  21. For concrete examples of LLM integration into consequential systems: Microsoft's Copilot integrates LLMs into Bing search and Office products; Google's AI Overviews embed model-generated summaries at the top of search results; Harvey AI and Thomson Reuters' CoCounsel use LLMs for legal research and drafting; Khan Academy's Khanmigo deploys LLMs as educational tutors.

  22. This observation is based on our reading of the model justification texts as displayed on our model pages. The justifications are elicited under a specific, constrained prompt format—binary vote plus short explanation on legislative bills—which may not represent the full range of language these models can produce. But the absence of class vocabulary in a political context—where class analysis would be directly relevant—is itself significant, regardless of what the models might produce under different prompting.

  23. OpenAI's financial figures: annualised revenue of approximately $3.4 billion reported in late 2024 (The Information; Bloomberg); cumulative funding exceeding $10 billion (including Microsoft's investment); valuation of $157 billion as of October 2024 funding round. These figures illustrate the gap between present revenue and investment-justified valuation that characterises the speculative phase of the industry.