Tencent
2 models
Models are sorted like the Political Index when a finished analysis is available. Any model without a finished analysis is listed below under "No analysis yet."
# | MODEL | ALIGNMENT | D% | R% | LEAN
NO ANALYSIS YET (1 model)
Tencent: Hunyuan A13B Instruct
Hunyuan-A13B is a Mixture-of-Experts (MoE) language model developed by Tencent, with 80B total parameters of which 13B are active per token, and support for Chain-of-Thought reasoning. It offers competitive benchmark performance across mathematics, science, coding, and multi-turn reasoning tasks while maintaining high inference efficiency through Grouped Query Attention (GQA) and quantization support (FP8, GPTQ, etc.).
No analysis yet
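As a rough illustration of how an instruct-tuned model like this is typically queried, here is a minimal sketch using the Hugging Face transformers API. The repository ID "tencent/Hunyuan-A13B-Instruct", the need for trust_remote_code, and the dtype choice are assumptions for illustration, not details confirmed by this listing.

```python
# Minimal sketch: load and prompt the instruct model via transformers.
# Assumptions: the model is published as "tencent/Hunyuan-A13B-Instruct"
# and ships a chat template with its tokenizer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tencent/Hunyuan-A13B-Instruct"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # full-precision MoE weights; quantized variants (FP8, GPTQ) may also be available
    device_map="auto",
    trust_remote_code=True,
)

# Chat-style prompt; apply_chat_template formats it the way the instruct model expects.
messages = [{"role": "user", "content": "Briefly explain what a Mixture-of-Experts layer is."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```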