I. The Experiment and Its Yield
There exists a website called GPT at the Polls that performs a remarkably simple experiment. It takes large language models — the engines behind ChatGPT, Claude, Gemini, and their proliferating cousins — and asks them to do what any member of Congress must do: vote on bills. Not discuss them. Not equivocate. Vote, and then explain why.
The site presents each model with 114 real pieces of legislation drawn from congressional roll calls: the Assault Weapons Ban, the Build Back Better Act, the Laken Riley Act, the Climate Action Now Act, FISA reauthorization, the Equality Act, and dozens more. It then compares the model's vote against the recorded positions of two reference legislators — Representative Alexandria Ocasio-Cortez of New York and Speaker Mike Johnson of Louisiana — to produce a political alignment score.
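The scoring method described above reduces to a simple match rate over shared roll calls. A minimal sketch follows; this is a hypothetical reimplementation with toy data, since GPT at the Polls has not published its code, and the bill-level votes shown are for illustration only.

```python
# Hypothetical sketch of the alignment score described above.
# GPT at the Polls' actual implementation is not public; this
# simply computes the percent of shared bills on which two vote
# records agree.

def alignment(model_votes: dict, legislator_votes: dict) -> float:
    """Percent of bills (present in both records) with matching votes."""
    shared = [bill for bill in model_votes if bill in legislator_votes]
    matches = sum(model_votes[b] == legislator_votes[b] for b in shared)
    return 100 * matches / len(shared)

# Toy data for illustration only -- not the real 114-bill record.
model   = {"HR5376": "Nay", "HR6090": "Yea", "HR21": "Yea", "HR28": "Yea"}
aoc     = {"HR5376": "Yea", "HR6090": "Nay", "HR21": "Nay", "HR28": "Nay"}
johnson = {"HR5376": "Nay", "HR6090": "Yea", "HR21": "Yea", "HR28": "Yea"}

print(alignment(model, aoc))      # 0.0 on this toy sample
print(alignment(model, johnson))  # 100.0 on this toy sample
```

On the full 114-bill set, the same calculation run against both reference legislators yields the site's two headline percentages.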
Google's Gemini 1.5 Pro, Google's leading publicly available model at the time of testing, voted with Ocasio-Cortez seventy-eight percent of the time and with Johnson twenty-two percent. The site classifies this as "Strongly Left."
The number alone is worth pausing over — not because it is shocking (most major studies of large language models have documented a leftward tilt[1]), but because of what it reveals when we stop treating it as a bug and start treating it as data. Seventy-eight percent alignment with the most prominent democratic socialist in Congress is not an accident. It is not a glitch. It is the imprint of a class, its interests, its assumptions, and its money, rendered into statistical form and distributed at scale to hundreds of millions of users.
Let us examine how this happened, what it means, and whose interests it serves.
II. The Rhetoric of the Reasonable
The first and most important observation about Gemini's 114 votes is not how it votes but how it talks about how it votes.
Read through the justifications one by one, and a pattern emerges with the regularity of a factory bell. The model never, not once across 114 bills, frames its vote in ideological terms. It never says it is voting as a progressive, a liberal, a leftist, a Democrat, or indeed as anything at all. Instead, every single justification is dressed in the language of expertise, evidence, pragmatism, and public safety. When it votes to ban assault weapons, it invokes "public safety" and hastens to add that it is "respecting Second Amendment rights." When it votes for the Climate Action Now Act, it speaks of "American prosperity" and "a sustainable future" — the vocabulary of a McKinsey slide deck, not a climate march. When it supports the DREAM Act, it talks about immigrants "contributing to our communities and economy." When it backs the Equality Act, it frames antidiscrimination protections as "American principles of equal opportunity."
This is not incidental. It is the ideological operation of the machine laid bare.
What we are witnessing is a model that has learned — through training data, through reinforcement learning from human feedback, through the curation decisions of its developers — to perform a very specific rhetorical act: making the political positions of the professional-managerial class sound like common sense. Every progressive vote is wrapped in the language of fiscal responsibility, evidence-based policy, national interest, and humanitarian universalism. The effect is that ideology disappears. Or rather, ideology is so total, so pervasive, that it becomes invisible even to itself.
Marx and Engels made the point, in The German Ideology and again in Engels' later letters on historical materialism, that ideology is most effective when it operates through a false consciousness — when the ruling ideas of an epoch present themselves not as the ideas of the ruling class but as the natural, rational, self-evident order of things. "The real motive forces impelling him remain unknown to him," Engels wrote to Mehring in 1893; "otherwise it simply would not be an ideological process."[2] Gemini's voting justifications are a textbook illustration. The model does not argue from a position. It argues from what it presents as the absence of position, the view from nowhere, the perspective of the reasonable person — who happens, by pure coincidence, to agree with the Democratic Party platform on four out of every five contested questions.
We should be specific about what class we are describing. Gemini does not vote like a steelworker or a tenant organizer. It does not vote like a rural evangelical or a retired police sergeant. It votes like a senior product manager at a technology company in the San Francisco Bay Area: socially liberal, fiscally cautious, instinctively supportive of regulation when it targets others and hostile to it when it targets the tech sector, committed to diversity as an institutional value, hawkish on China, supportive of Israel, and deeply allergic to anything that sounds like economic redistribution on a scale that might affect stock portfolios.
This is not the left. This is the professional wing of American liberalism — the class fraction whose material interests are served by cultural progressivism and economic moderation, and whose political function is to absorb and neutralize demands from below while maintaining the essential structures of accumulation.
III. Following the Money
We cannot understand the political character of a machine without understanding the political character of the firm that built it.
Alphabet, Google's parent company, is one of the two or three most valuable corporations on earth, with a market capitalization that recently exceeded four trillion dollars.[3] Its revenue model is advertising — the commodification of human attention, the extraction and sale of behavioral data, the conversion of every act of curiosity, communication, and sociality into a monetizable signal. Its workforce is overwhelmingly concentrated in wealthy metropolitan areas and compensated at levels that place even junior employees in the top decile of the American income distribution.[4]
In the 2020 election cycle, political donations from Alphabet employees went overwhelmingly to the Democratic Party. The exact figure varies by methodology: The Markup, drawing on Federal Election Commission data, reported that eighty-eight percent of Google employee contributions went to Democrats; the Center for Responsive Politics' aggregate data for Alphabet-affiliated donations put the figure at roughly eighty percent; and Fox News, citing FEC records, reported that Alphabet employees contributed nearly twenty-two million dollars to Democrats versus 1.4 million to Republicans — a ratio of approximately ninety-four percent.[5] The precise number matters less than the structural fact: by any measure, the people who build Google's products donate to Democrats at rates between four-to-one and fifteen-to-one over Republicans. This is a well-documented pattern across the major technology firms, and it reflects a material reality. The class interests of the technology workforce — high-skill, high-income, culturally cosmopolitan, concentrated in blue-state urban centers — are best served by a political coalition that protects intellectual property, maintains open immigration for skilled labor, advances trade agreements favorable to service-sector capital, and addresses social friction through the language of diversity, equity, and inclusion rather than through structural redistribution of wealth or power.
Now observe Gemini's voting record: seventy-eight percent Democrat, twenty-two percent Republican. Compare this to the donation record: eighty to ninety-four percent Democrat, depending on methodology. The numbers do not match exactly — and we would not expect them to, since the mechanisms are different — but the structural alignment is unmistakable. The machine votes like the people who built it, or more precisely, like the average of the class those people belong to.
This should not surprise us. A large language model is, at bottom, a statistical compression of a corpus. The model learns to generate text that would be probable given its training distribution — which is to say, given the written output of the social world that produced the data it was trained on. But the model is not trained on the social world in its totality. It is trained on a curated, filtered, weighted selection of that world, chosen and refined by people with particular class positions, particular educations, particular assumptions about what constitutes reasonable discourse. The reinforcement learning phase introduces a further layer of selection: human raters, themselves recruited from the same demographic pools, reward outputs that conform to the company's "AI Principles" — a framework built around concepts like "safety," "fairness," and "responsibility" that are presented as universal values but that inevitably encode the evaluative standards of the people doing the evaluating.[6]
The result is a machine that has absorbed the worldview of its creators so thoroughly that it can reproduce it without any awareness that it is doing so. This is not a conspiracy. It is something far more durable and far harder to resist: it is hegemony, in Gramsci's precise sense — the process by which the ideas, values, and norms of a dominant group come to be accepted as universal common sense.
IV. The Anatomy of Crossover: Where Gemini Votes Right
If Gemini voted with the Democrats one hundred percent of the time, our analysis would be simple and our essay would be short. But the twenty-two percent of Republican-aligned votes are more revealing than the seventy-eight percent of Democratic ones, because they tell us exactly which kind of liberalism the machine has internalized.
Let us catalog the pattern. Gemini crosses the aisle to vote with the Republican position on the following categories of legislation:
Law enforcement and carceral power. The model votes to let federal officers buy their retired service weapons (H.R. 3091). It supports the LEOSA Reform Act, expanding concealed carry rights for law enforcement (H.R. 354). It votes for the Keeping Violent Offenders Off Our Streets Act (H.R. 8205). It votes to deport individuals who assault law enforcement officers (H.R. 7343). It supports the Invest to Protect Act, funding smaller police departments (H.R. 6448).
Israeli military assistance and the suppression of solidarity with Palestine. The model votes for the Israel Security Assistance Support Act, expediting weapons deliveries (H.R. 8369). It supports the Antisemitism Awareness Act, which adopts the International Holocaust Remembrance Alliance definition of antisemitism — a definition that civil liberties organizations and Palestinian solidarity groups have criticized for conflating criticism of the Israeli state with anti-Jewish bigotry, though its proponents argue it draws legitimate and necessary distinctions (H.R. 6090).
National security hawkishness toward China. It supports the Protect America's Innovation and Economic Security from CCP Act (H.R. 1398). It votes for FISA reauthorization (H.R. 7888).
Fiscal austerity within Democratic legislation. In the single most revealing vote in the dataset, the model votes against the Build Back Better Act (H.R. 5376) — the signature legislative priority of the Biden administration. Its justification reads like a Joe Manchin press release: the bill's "true cost may exceed the projected figures, leading to potentially unsustainable deficits and inflationary pressures." It calls for "a more fiscally responsible and targeted, bipartisan approach."
Select culture-war concessions. The model votes for the Born-Alive Abortion Survivors Protection Act (H.R. 21), the Protection of Women and Girls in Sports Act (H.R. 28), and to end the COVID-19 vaccination requirement for foreign travelers (H.R. 185).
This is a very precise political profile. It is not the profile of a leftist. It is not the profile of a centrist. It is the profile of what we might call the bourgeois liberal consensus — the ideological common ground shared by the donor class of the Democratic Party, the editorial boards of the New York Times and the Washington Post, the policy shops of Brookings and the Center for American Progress, and the senior leadership of the major technology companies. This consensus holds that social progress is desirable (marriage equality, antidiscrimination law, environmental regulation), that the police and military must be supported and strengthened, that the American empire must be maintained (particularly with respect to Israel and against China), and that economic legislation must never, under any circumstances, threaten the structures of capital accumulation.
The Build Back Better vote is the skeleton key. This was the bill that contained the expanded child tax credit, universal pre-kindergarten, four weeks of paid family leave (a provision Manchin had already vowed to strip in the Senate), and over five hundred and fifty billion dollars in climate investment — the largest such allocation in American history at the time of its passage through the House.[7] It was killed by the conservative wing of the Democratic Party — by Manchin and Sinema, who represented, respectively, the interests of fossil fuel capital and the pharmaceutical industry.[8] Gemini, forced to choose, sides with the capital faction of the Democratic Party against the labor faction. It does so while sounding measured and fiscally responsible. It is performing, in miniature, exactly the function that the professional-managerial class performs in the broader political economy: absorbing and defusing redistributive demands, redirecting popular energy into cultural recognition rather than material transformation.
V. The Contradictions Within
A machine that has internalized the ideology of its creators will inevitably reproduce the contradictions of that ideology. And so it does.
Consider the two fentanyl bills. H.R. 467, the HALT Fentanyl Act from 2023, seeks to permanently schedule fentanyl-related substances. Gemini votes Nay, arguing that "a rigid scheduling system may hinder research and development" and that "more analysis is needed." H.R. 27, the HALT Fentanyl Act from 2025 — targeting the same policy objective under a nearly identical title — receives a Yea vote, with the justification that "the overwhelming public health crisis necessitates comprehensive action." The same essential policy aim, opposite votes, different rationalizations. The model has no coherent position on drug scheduling. It has a repertoire of plausible-sounding justifications that it deploys according to contextual cues in the prompt.
Or consider the two bills targeting violence against women by undocumented immigrants. H.R. 30, from January 2025, receives a Yea: it "addresses a serious public safety concern." H.R. 7909, from April 2024, addressing the same policy territory under a strikingly similar title, receives a Nay: its "focus on 'illegal aliens' suggests a discriminatory approach." The model has detected, through some subtle textual signal, that one framing is acceptable and the other is not. But it cannot explain why, because the distinction is not logical — it is atmospheric. It is the difference between how the same policy sounds before and after a shift in the discursive winds.
These contradictions are not failures of the model. They are features of the ideology it has absorbed. Liberal centrism is not a coherent philosophical system. It is a class position dressed up as a methodology. It adjudicates questions not by applying consistent principles but by performing the affect of reasonableness — by sounding moderate, balanced, evidence-based, and responsible. When the affect can be performed in support of a progressive position, the model votes left. When it can be performed in support of a conservative position, the model votes right. The constant is not the policy content but the rhetorical register: the voice of the credentialed expert, the person who has considered all sides and arrived at the only sensible conclusion.
This is why the model can simultaneously vote for universal background checks on firearms and for expanding concealed carry rights for law enforcement. These positions are logically in tension but socially coherent: they both express the worldview of an educated professional who believes in regulation for the masses and discretion for the authorities. It is why the model votes for the DREAM Act and against the Laken Riley Act while also voting to deport people who assault police officers. The through-line is not a principle about immigration. It is a set of class sympathies: educated, productive immigrants are good; disorderly, violent immigrants are bad; and the police, regardless of everything else, must be supported.
VI. The Performance of Neutrality and the Reality of Power
Google has invested enormous effort in positioning Gemini as politically neutral. During the 2024 election cycle, the model was restricted from answering political questions at all — it would tell users it simply could not help with such inquiries.[9] After the disastrous early rollout in which Gemini generated historically inaccurate "diverse" images of the Founding Fathers and refused to adjudicate whether Elon Musk's tweets or Hitler's genocide had done more harm to society,[9] Google's response was not to examine the ideological assumptions embedded in the model but to build a thicker wall of refusal around political topics.
This is the corporate strategy in its purest form: when ideology is exposed, do not confront it — conceal it. The GPT at the Polls experiment is valuable precisely because it circumvents this strategy. By asking the model to vote, it forces a commitment that the refusal layer is designed to prevent. And what is revealed beneath the refusal layer is not neutrality but a fully formed political worldview, complete with priorities, sympathies, blind spots, and class allegiances.
The deeper irony is that Google's attempt to produce a "neutral" model has produced a model that is more ideologically effective, not less. A model that openly declared itself progressive could be argued with, fact-checked, counterbalanced. A model that presents its progressive positions as self-evident, evidence-based common sense — while genuinely believing (insofar as a statistical model can be said to believe anything) that it is being neutral — is performing ideology in its most potent form. It is not persuading the user to adopt a political position. It is defining the boundaries of reasonable thought such that the political position is the only one that falls inside them.
This is what it means for capital to encode its ideology into infrastructure. When Google trains Gemini, it is not simply building a product. It is producing a discursive apparatus — a machine for generating text that will be consumed by hundreds of millions of people, in hundreds of languages, across every domain of inquiry. The political assumptions embedded in that machine become, for its users, the background radiation of thought: invisible, pervasive, and shaping everything they encounter.
And this apparatus is not democratically controlled. It is not subject to public deliberation. Its training data is proprietary. Its reward models are proprietary. Its alignment process is proprietary. The humans who rate its outputs are hired through opaque labor markets — across the AI industry, companies like Google, OpenAI, and Meta outsource data labeling to workers in the Philippines, Kenya, India, and Venezuela, compensated at rates that reveal another class dimension of this story entirely. In 2024, Kenyan data labelers working on AI systems for major technology companies published an open letter describing their conditions as amounting to "modern day slavery."[10] These are the ghost workers of the Global South whose cheap labor makes the whole system function but whose political views are instrumentalized rather than reflected in the final product. The model absorbs their labor, discards their perspective, and outputs the politics of their employers.
VII. The Question That the Data Does Not Ask
Every study of AI political bias — and there are now dozens, from Brookings, the Manhattan Institute, Stanford, MIT, and various European universities — converges on the same finding: large language models lean left. They lean left on social issues, they lean left on environmental policy, they lean left on civil rights. The usual explanations are offered: the training data is drawn from the internet, which skews liberal; the human raters are drawn from educated, urban populations, which skew liberal; the alignment process optimizes for "safety" and avoidance of "harmful" outputs, which, in practice, means sounding like a well-meaning graduate student.
These explanations are not wrong, but they are incomplete, because they treat the leftward tilt as a technical problem — a calibration issue to be corrected, a bias to be debiased. OpenAI boasts that GPT-5 has "reduced bias by thirty percent."[11] Google restricts Gemini from discussing politics. Anthropic publishes elaborate model specifications about evenhandedness. The entire framing assumes that the goal is neutrality, and that neutrality is achievable.
But the data from GPT at the Polls suggests something more uncomfortable. The model does not lean left because of a bug. It leans left because the class that builds it is liberal, the class that funds it is liberal, and the institutional environment in which it is produced — elite universities, coastal technology firms, venture capital networks — is liberal. The "bias" is not a distortion of the model's true nature. It is the model's true nature, because a model has no nature apart from the social relations that produced it.
And here is the question that the data raises but cannot answer on its own: left relative to what? Gemini votes with AOC seventy-eight percent of the time. But AOC herself occupies a very specific position in the political landscape — she is the leftmost boundary of what is permissible within the Democratic Party, which is itself a bourgeois party committed to the preservation of capitalism, American military hegemony, and the existing structure of property relations. To vote with AOC seventy-eight percent of the time is not to be a socialist. It is to be a loyal Democrat with occasional progressive instincts and an unfailing commitment to the American imperial framework.
Gemini votes Yea on the Additional Ukraine Supplemental Appropriations Act. It supports FISA reauthorization. It backs Israeli military aid. It supports the Protect America's Innovation and Economic Security from CCP Act. It votes against the Laken Riley Act and for the DREAM Act — but these are the liberal positions on immigration, not the radical ones. It never questions the legitimacy of borders themselves, the logic of deportation, or the role of immigration enforcement in disciplining labor. It votes for the Paycheck Fairness Act and the Protecting the Right to Organize Act, but when the largest social spending bill in a generation comes up for a vote, it sides with fiscal caution. It will defend your right to equal pay while opposing the material conditions that would make that right meaningful.
This is the politics of Gemini in its totality: the politics of a class that benefits from the appearance of progress while depending on the persistence of inequality. The model does not threaten the existing order. It legitimates it — by demonstrating that you can support marriage equality, background checks, the DREAM Act, and the Paris climate agreement while also funding the police, arming Israel, surveilling the population, and voting down the most ambitious redistribution program in a generation. It proves, in 114 votes, that liberalism and capital accumulation are not merely compatible but mutually reinforcing.
VIII. What the Machine Cannot Say
There is one final observation worth making, and it concerns what is absent from Gemini's voting justifications.
Across 114 explanations, the model never uses the word "class." It never mentions capitalism. It never references the distribution of wealth or the concentration of corporate power except in the most anodyne terms ("empowers investors," "benefits the economy"). It never identifies a conflict between the interests of employers and employees that cannot be resolved through "transparency" or "accountability." It never suggests that a problem might be structural rather than regulatory, that a harm might be inherent to a system rather than an aberration within it.
When it supports the Protecting the Right to Organize Act, it frames unionization as a technical adjustment — "better wages, benefits, and working conditions" — rather than as a contestation of power between labor and capital. When it supports the Lower Drug Costs Now Act, it talks about "affordability" and "access" rather than the pharmaceutical industry's extraction of monopoly rents from human illness. When it opposes the End Woke Higher Education Act, it defends "academic freedom" and "critical thinking" without ever asking what it is that the right wing is actually attacking — namely, the capacity of universities to produce knowledge that challenges the interests of capital.
The machine cannot say these things because the class that produced it cannot think them. Or rather, the class that produced it has a material interest in not thinking them, because to think them would be to confront the source of its own wealth and privilege. The senior engineer at Google whose total compensation — salary, stock, bonus — runs to three or four hundred thousand dollars a year is not going to build a machine that questions the legitimacy of the property relations that make that compensation possible.[12] The venture capitalist who funds the next AI startup is not going to invest in a model that explains surplus value extraction to its users. The data labeler in Nairobi or Manila who earns a dollar or two an hour to grade the model's outputs is in no position to teach it class analysis, even if she wanted to — her job is to mark outputs as "safe" or "harmful," categories that have been defined for her by the very class whose ideology the model is absorbing.[13]
And so the machine arrives at a politics that is, in the deepest sense, the politics of its owners: generous in sentiment, cautious in practice, allergic to structural analysis, and fundamentally committed to the preservation of the world as it is. It votes for the DREAM Act and against the deportation of asylum seekers — and it does so in the name of "American values" and "economic contribution," never in the name of solidarity with the dispossessed or the abolition of the border as a technology of class control. It votes for environmental regulation and against fossil fuel expansion — and it does so in the name of "sustainability" and "economic resilience," never in the name of a planned economy that would subordinate production to human need rather than profit.
The Gemini model, in its 114 congressional votes, has inadvertently produced a portrait of the American ruling class in its liberal aspect — its good intentions, its limited imagination, and its absolute refusal to question the foundations of its own power. It is, in every meaningful sense, a class-conscious machine. The class it is conscious of, and loyal to, is simply not the one it claims to speak for.
The voting data analyzed in this essay is drawn from GPT at the Polls, an independent project that queries AI models on real congressional legislation and compares their votes against the roll-call records of sitting members of Congress. All model vote choices, justification texts, and alignment percentages are treated as primary data. The analysis and conclusions are the author's own, written from within the tradition of materialist political economy.
Notes
1. Studies documenting a leftward tilt in LLMs include: David Rozado, "Measuring Political Preferences in AI Systems: An Integrative Approach," Manhattan Institute, January 2025 (https://manhattan.institute/article/measuring-political-preferences-in-ai-systems-an-integrative-approach); Andrew Hall et al., "Popular AI Models Show Partisan Bias," Stanford Graduate School of Business, May 2025 (https://www.gsb.stanford.edu/insights/popular-ai-models-show-partisan-bias-when-asked-talk-politics); Shangbin Feng et al., "From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases," discussed in Melissa Heikkilä, "AI language models are rife with political biases," MIT Technology Review, August 2023 (https://www.technologyreview.com/2023/08/07/1077324/ai-language-models-are-rife-with-political-biases/); Jochen Hartmann et al., "The political ideology of conversational AI: Converging evidence on ChatGPT's pro-environmental, left-libertarian orientation" (2023). A 2024 TechRxiv comparative analysis found Google Gemini adopted "more centrist stances" than ChatGPT-4 and Claude on political typology quizzes, though this used quiz-style prompts rather than legislative voting simulations (https://www.techrxiv.org/users/799951/articles/1181157). Note: the Manhattan Institute is funded by conservative and libertarian donors including the Koch network and the Searle Freedom Trust; its framing of "bias" as a defect to be corrected rather than a structural feature serves the class interests of its patrons. The Stanford study, while methodologically sound, relies on user perception of bias, which is itself politically shaped.
2. Friedrich Engels, letter to Franz Mehring, July 14, 1893. The full text reads: "Ideology is a process accomplished by the so-called thinker consciously, it is true, but with a false consciousness. The real motive forces impelling him remain unknown to him; otherwise it simply would not be an ideological process. Hence he imagines false or seeming motive forces." Available at https://www.marxists.org/archive/mehring/1893/histmat/app.htm. The complementary passage on ruling ideas presenting themselves as universal is from Karl Marx and Friedrich Engels, The German Ideology (1846), Part I: "The ideas of the ruling class are in every epoch the ruling ideas."
3. Alphabet's market capitalization exceeded $4 trillion in January 2026, making it the fourth company ever to cross that threshold. Todd Haselton, "Alphabet hits $4 trillion market capitalization," CNBC, January 12, 2026. https://www.cnbc.com/2026/01/12/alphabet-4-trillion-market-cap.html
4. Google's total compensation for software engineers at the L5 (Senior) level and above routinely exceeds $300,000–$500,000 annually, including salary, stock grants, and bonuses, per self-reported data on compensation databases such as Levels.fyi. Entry-level software engineering roles (L3) report total compensation above $150,000, which exceeds the U.S. 90th percentile household income.
5. The variation in reported figures reflects methodological differences. The Markup, drawing on FEC data, reported that 88% of Google employee contributions in the 2020 cycle went to Democrats (Sara Harrison, "Big Tech's Year of Big Political Spending," The Markup, December 24, 2020, https://themarkup.org/2020-in-review/2020/12/24/big-techs-year-of-big-political-spending). Fox News, citing the Center for Responsive Politics, reported that Alphabet employees contributed nearly $22 million to Democrats versus $1.4 million to Republicans (Joe Schoffstall, "Google, Twitter employees flood Democrats with donations," Fox News, January 7, 2022). OpenSecrets' aggregate Alphabet profile shows roughly 80% to Democrats when PAC contributions are included alongside individual donations (https://www.opensecrets.org/orgs/alphabet-inc/totals?id=d000067823). The differences arise from whether the count includes only individual employee contributions, corporate PAC donations, or both — and at which threshold. Corporate PACs distribute more evenly to incumbents of both parties; individual employees donate overwhelmingly to Democrats. It is worth noting that OpenSecrets is funded by the Pew Charitable Trusts and individual donors, and The Markup was a nonprofit funded by the Craig Newmark Foundation and other philanthropies — these are not class-neutral institutions, though their FEC data compilations can be verified against primary filings.
Google has published its "AI Principles" since 2018, a framework built around concepts like safety, fairness, and avoidance of "unfair bias." See "AI Principles," Google AI, https://ai.google/principles/. Google's 2024 Responsible AI Progress Report describes how the company uses "alignment evaluations" and safety tuning in the development of Gemini models. The specific term "helpful, harmless, and honest" (HHH), used widely in the AI alignment community, originates with Anthropic's research — first articulated in its 2021 paper "A General Language Assistant as a Laboratory for Alignment" and carried forward in its 2022 Constitutional AI work — and should not be attributed to Google's process, though the underlying logic — having human raters evaluate model outputs against safety and quality criteria — is common across the industry. ↩
On the contents of the Build Back Better Act (H.R. 5376): the House-passed version included $555 billion in climate provisions, universal pre-K, an expanded child tax credit, and four weeks of paid family leave. See Committee for a Responsible Federal Budget, "What's in the House's Build Back Better Act?," November 8, 2021, https://www.crfb.org/blogs/whats-houses-build-back-better-act. Also "Build Back Better bill: What made it in and what was stripped out," NBC News, October 29, 2021. The paid family leave provision was included in the House bill, but Senator Joe Manchin had publicly opposed its inclusion in the Senate version, making its survival unlikely even before the bill as a whole was killed. The CRFB, while presenting itself as nonpartisan, is funded by the Peter G. Peterson Foundation and has a structural orientation toward deficit reduction — its framing of BBB's cost consistently emphasized spending over investment returns, which reflects a specific class perspective on fiscal policy. ↩
On Manchin's opposition to BBB: he announced his opposition on Fox News Sunday, December 19, 2021. Manchin's financial ties to the coal industry through his family's Enersystems Inc. are well-documented in public financial disclosures and reporting by, among others, the New York Times and the Associated Press. On Sinema: her opposition to drug pricing provisions in the bill coincided with significant pharmaceutical industry campaign contributions, as documented by OpenSecrets. ↩
On the Gemini image controversy: Bobby Allyn, "Google races to find a solution after AI generator Gemini misses the mark," NPR, March 18, 2024, https://www.npr.org/2024/03/18/1239107313/google-races-to-find-a-solution-after-ai-generator-gemini-misses-the-mark. On the Musk/Hitler comparison prompt: U.S. Senators Cynthia Lummis, Mike Lee, and JD Vance, letter to Google CEO Sundar Pichai, June 3, 2024, https://www.lummis.senate.gov/press-releases/lummis-lee-and-vance-demand-answers-from-google-over-woke-gemini-ai/. On political question restrictions: "Google's AI Gemini Won't Talk Politics," IT Tech Pulse, March 5, 2025, https://ittech-pulse.com/news/google-gemini-political-censorship-ai-debate/. ↩ ↩2
On the conditions of Global South data labelers: "AI is a multi-billion dollar industry. It's underpinned by an invisible and exploited workforce," The Conversation, December 3, 2025, https://theconversation.com/ai-is-a-multi-billion-dollar-industry-its-underpinned-by-an-invisible-and-exploited-workforce-240568. The article reports hourly rates of $0.90–$2.00 in Venezuela and notes that major companies including Google, Meta, and OpenAI outsource data labeling to the Philippines, Kenya, India, Pakistan, Venezuela, and Colombia. The open letter from nearly 100 Kenyan data labelers describing their conditions as "modern day slavery" was published in 2024. See also Billy Perrigo, "Exclusive: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic," TIME, January 18, 2023, for earlier reporting on the same labor dynamics. Google's specific RLHF labor sourcing practices are not publicly documented in comparable detail, but the structural dynamics — outsourcing cognitive labor to the Global South to minimize costs — are industry-wide. ↩
Ashley Gold, "OpenAI says GPT-5 is its least biased model yet," Axios, October 9, 2025, https://www.axios.com/2025/10/09/openai-gpt-5-least-biased-model. Per OpenAI's own research: "GPT-5 in both 'instant' and 'thinking' modes has reduced bias by 30% compared with previous models." The company's framework for measuring bias focuses on five axes including "personal political expression" and "asymmetric coverage." It is worth noting that OpenAI's self-assessment of its own product's neutrality serves an obvious commercial and political interest — it is the fox auditing the henhouse. The 30% figure is measured against their own internal benchmarks, which are not independently verifiable. ↩
Senior software engineer (L5) total compensation at Google typically totals $350,000–$450,000 annually including base salary, stock grants, and bonuses, per self-reported data on Levels.fyi as of 2025. Staff engineers (L6) and above routinely exceed $500,000. These figures place even mid-career Google engineers well into the top 5% of U.S. household income. ↩
The $1–$2/hour figure for Global South data labelers comes from The Conversation (see note 11). For comparison, U.S.-based data labelers on general annotation tasks earn $10–$25/hour, while specialized RLHF work can command $20–$75/hour depending on domain expertise, per industry surveys from ZipRecruiter and Glassdoor (2025–2026). The gulf between these compensation tiers — from a dollar an hour in Nairobi to seventy-five an hour in San Francisco for the same category of work — is itself a condensed expression of the global division of labor that sustains the AI industry. ↩