OpenAI: gpt-oss-120b (exacto)
OpenAI's gpt-oss-120b is an open-weight, 117B-parameter Mixture-of-Experts (MoE) language model from OpenAI designed for high-reasoning, agentic, and general-purpose production use cases. It activates 5.1B parameters per forward pass and is optimized to run on a single H100 GPU with native MXFP4 quantization. The model supports configurable reasoning depth, full chain-of-thought access, and native tool use, including function calling, browsing, and structured output generation.
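The gap between 117B total and 5.1B active parameters comes from MoE routing: a router scores the experts for each token and only the top-k experts run. The toy sketch below illustrates the idea with made-up sizes (the expert count, k, and parameter figures are hypothetical placeholders, not the actual gpt-oss-120b configuration):

```python
import math

NUM_EXPERTS = 8          # hypothetical expert count
TOP_K = 2                # hypothetical number of experts run per token
PARAMS_PER_EXPERT = 10   # hypothetical size, stands in for billions

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def route(router_logits, k=TOP_K):
    """Select the top-k experts for one token and renormalize their weights."""
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return {i: probs[i] / norm for i in top}

# One token's router logits: experts 1 and 4 score highest, so only they run.
weights = route([0.1, 2.0, -1.0, 0.5, 1.5, 0.0, -0.5, 0.3])
active = len(weights) * PARAMS_PER_EXPERT
print(sorted(weights))                                    # → [1, 4]
print(active, "of", NUM_EXPERTS * PARAMS_PER_EXPERT)      # → 20 of 80
```

Because every token touches only the selected experts plus the shared layers, per-token compute and memory traffic scale with the active parameters, which is what makes single-GPU deployment of a 117B-parameter model feasible.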
ANALYSIS STATUS
No analysis yet
This model exists in our database, but we only publish political alignment results once its full analysis run is complete.
A full run currently requires 114 completed bill analyses.