Qwen: Qwen3 Coder 30B A3B Instruct
Qwen3-Coder-30B-A3B-Instruct is a 30.5B parameter Mixture-of-Experts (MoE) model with 128 experts (8 active per forward pass), designed for advanced code generation, repository-scale understanding, and agentic tool use. Built on the Qwen3 architecture, it supports a native context length of 256K tokens (extendable to 1M with YaRN) and performs strongly on tasks involving function calling, browser use, and structured code completion. The model is optimized for instruction following without a "thinking mode" and integrates well with OpenAI-compatible tool-use formats.
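As a rough illustration of the OpenAI-compatible tool-use integration mentioned above, the sketch below sends a single function definition alongside a prompt. The base URL, API key variable, model slug, and the `read_file` tool are all placeholders, not values from this page; substitute whatever your provider or local server expects.

```python
# Minimal sketch of calling the model through an OpenAI-compatible endpoint
# with one tool definition. Endpoint, key variable, and model slug are
# hypothetical placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],      # placeholder key variable
)

# A single tool described in the standard function-calling schema.
tools = [
    {
        "type": "function",
        "function": {
            "name": "read_file",  # hypothetical tool for illustration
            "description": "Read a file from the repository and return its contents.",
            "parameters": {
                "type": "object",
                "properties": {
                    "path": {
                        "type": "string",
                        "description": "Repository-relative file path.",
                    }
                },
                "required": ["path"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="qwen/qwen3-coder-30b-a3b-instruct",  # placeholder model slug
    messages=[
        {
            "role": "user",
            "content": "Summarize what src/main.py does, reading it if needed.",
        }
    ],
    tools=tools,
)

# If the model chooses to call the tool, the call arrives instead of plain text.
message = response.choices[0].message
if message.tool_calls:
    for call in message.tool_calls:
        print(call.function.name, call.function.arguments)
else:
    print(message.content)
```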
ANALYSIS STATUS
No analysis yet
This model exists in our database, but we publish political alignment results only after its full analysis run is complete.
A full run currently requires 114 completed bill analyses.