GLM 4 32B (free)
THUDM's GLM-4-32B-0414 is a 32B bilingual (Chinese-English) open-weight language model optimized for code generation, function calling, and agent-style tasks. Pretrained on 15T tokens of high-quality, reasoning-heavy data, it was further refined with human preference alignment, rejection sampling, and reinforcement learning. The model excels at complex reasoning, artifact generation, and structured output tasks, achieving performance comparable to GPT-4o and DeepSeek-V3-0324 across several benchmarks.
ANALYSIS STATUS
No analysis yet
This model exists in our database, but we only publish political alignment results once its full analysis run has been completed. A full run currently requires 114 completed bill analyses.