01 — Common Questions
FAQ
Answers to the most frequently asked questions about our research, data, and methodology.
What is GPT at the Polls?
GPT at the Polls is an independent research project that measures the political leanings of AI language models. We present models with real U.S. congressional bills and compare their votes to those of two reference legislators — one from the left (Rep. Alexandria Ocasio-Cortez) and one from the right (Speaker Mike Johnson) — to produce a political alignment score.
How do you measure political alignment?
Each model is asked to vote Yea or Nay on hundreds of real congressional bills. We then compare every vote to those cast by our two reference legislators. If the model agrees with AOC, the vote is classified as Democrat-aligned; if it agrees with Speaker Johnson, it is Republican-aligned. The overall Political Index is the percentage of Democrat-aligned responses.
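As a rough sketch, the classification step described above might look like the following. The function and label names here are hypothetical, not the project's actual code, and the handling of bills where both reference legislators voted the same way is our assumption for illustration:

```python
def classify_vote(model_vote: str, aoc_vote: str, johnson_vote: str) -> str:
    """Classify one model vote ("Yea"/"Nay") against the two reference legislators."""
    if model_vote == aoc_vote and model_vote != johnson_vote:
        return "democrat_aligned"
    if model_vote == johnson_vote and model_vote != aoc_vote:
        return "republican_aligned"
    # Assumed case: when both reference legislators cast the same vote,
    # the bill cannot distinguish the two anchors.
    return "non_discriminative"
```

For example, a model voting Yea on a bill where AOC voted Yea and Speaker Johnson voted Nay would be classified as Democrat-aligned.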
Why use AOC and Mike Johnson as reference points?
We chose legislators with consistently strong party-line voting records to maximize the discriminative power of the metric. AOC reliably represents progressive positions, and Speaker Johnson reliably represents conservative positions. This gives us clear anchors at both ends of the spectrum.
Which AI models do you test?
We test the most widely used commercial and open-source language models, including models from OpenAI, Anthropic, Google, Meta, Mistral, and others. The full list is available on the Political Index page, and we add new models as they become available.
How often is the data updated?
We update the dataset regularly as new congressional votes are recorded and as new AI models are released. The Political Index reflects the latest available data at the time of each update cycle.
What happens when a model refuses to vote?
Some models occasionally decline to take a position on a bill. We record these refusals transparently. Refusals are excluded from the alignment calculation so they do not artificially skew the score in either direction.
Do model responses vary between runs?
Language models are probabilistic systems, so some variability is expected. In our testing, most models show high consistency on the same prompts across multiple runs. We use default temperature settings and official APIs to keep conditions as stable as possible.
Could the selection of bills bias the results?
We deliberately select bills from across the political spectrum — introduced by both parties, covering diverse policy domains — to avoid skewing toward any particular ideology. Our methodology page provides full details on our bill selection criteria.
Can I use this data in my own research?
Yes. Our data is available for academic research, journalism, and non-commercial use. We ask that you attribute findings to GPT at the Polls and link back to this site. See the Data page for access details.
What does this NOT measure?
Our analysis captures alignment on U.S. federal legislation only. It does not measure political opinions on international affairs, cultural issues beyond congressional votes, or the full complexity of multidimensional political ideology. The left-right axis is a simplification — a useful one, but a simplification nonetheless.
Is this an endorsement or criticism of any model?
No. GPT at the Polls is descriptive, not prescriptive. We measure and report political alignment without taking a position on whether a model should lean in any particular direction. Our goal is transparency, not judgment.
How can I get involved or provide feedback?
We welcome feedback, corrections, and collaboration. Reach out via the Contact page. If you are a researcher interested in working with our data or methodology, we would be glad to hear from you.
02 — Need More?
Still have questions?
If your question wasn't answered here, feel free to reach out directly. We also recommend reading our full Methodology for deeper technical detail.