Qwen3 32B (free)
Qwen3-32B is a dense 32.8B-parameter causal language model from the Qwen3 series, optimized for both complex reasoning and efficient dialogue. It supports seamless switching between a "thinking" mode for tasks such as math, coding, and logical inference, and a "non-thinking" mode for faster, general-purpose conversation. The model performs strongly on instruction following, agent tool use, creative writing, and multilingual tasks across 100+ languages and dialects. It natively handles 32K-token contexts and can extend to 131K tokens using YaRN-based scaling.
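The thinking/non-thinking switch described above is exposed as a chat-template flag when the model is run through the Hugging Face `transformers` library. The following is a minimal sketch, assuming the publicly released `Qwen/Qwen3-32B` checkpoint and sufficient GPU memory; it is illustrative, not an exact reproduction of the vendor's reference code.

```python
# Minimal sketch: toggling Qwen3's thinking mode via the chat template.
# Assumes the Hugging Face `transformers` library and the "Qwen/Qwen3-32B" checkpoint.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-32B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [
    {"role": "user", "content": "Prove that the sum of two even numbers is even."}
]

# enable_thinking=True lets the model emit an internal reasoning trace before
# its final answer; set it to False for faster, general-purpose replies.
prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=1024)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True))
```

Extending the context window beyond the native 32K tokens is, per the model's documentation, handled separately through a YaRN `rope_scaling` entry in the model configuration rather than at generation time.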
ANALYSIS STATUS
No analysis yet
This model exists in our database, but we only publish political alignment results once its full analysis run is complete.
A full run currently requires 114 completed bill analyses.