Shanghai-based AI startup MiniMax launched MiniMax-M2, a new open-source model, on Monday, aiming to shake up the AI market on both price and power. MiniMax says M2 rivals top models such as Anthropic’s Claude Sonnet 4.5 while running at roughly 8% of the cost.

The model is built for AI agents and coding: its sparse design activates only 10 billion parameters per token, keeping costs low and speeds high. The launch puts MiniMax in direct competition with Western giants and local rival DeepSeek for the growing developer market.

A New Benchmark in Performance and Efficiency

Backed by Chinese tech giants Alibaba and Tencent, MiniMax is positioning its M2 model as a new leader in the open-source space.

MiniMax claims it delivers elite performance tailored for the next generation of AI applications.

“MiniMax-M2 redefines efficiency for agents. It’s a compact, fast, and cost-effective MoE model built for elite performance in coding and agentic tasks, all while maintaining powerful general intelligence,” states the model’s official documentation.

Its focus on agentic tasks—where an AI must plan, act, and verify complex workflows—targets a significant growth area in the software industry, moving beyond simple conversational AI to systems that can independently complete complex tasks.
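The plan-act-verify loop described above can be sketched in a few lines. This is a generic illustration of the agent pattern, not MiniMax's implementation; the planner, tool call, and checker below are stubs standing in for real LLM and tool calls.

```python
# Minimal sketch of an agentic plan-act-verify loop. All three functions
# are stubs: a real agent would call an LLM to plan, invoke real tools to
# act, and run tests or checks to verify each step.

def plan(goal):
    """Stub planner: break a goal into a fixed list of steps."""
    return [f"step {i + 1} of: {goal}" for i in range(3)]

def act(step):
    """Stub tool call: pretend to execute a step and return its result."""
    return f"result({step})"

def verify(result):
    """Stub check: decide whether a step's result is acceptable."""
    return result.startswith("result(")

def run_agent(goal):
    """Execute each planned step, stopping if verification fails."""
    trace = []
    for step in plan(goal):
        result = act(step)
        if not verify(result):
            break  # a real agent would re-plan or retry here
        trace.append((step, result))
    return trace

trace = run_agent("refactor module")
print(f"completed {len(trace)} steps")
```

The loop structure is what matters: each model call is cheap and fast enough that the agent can afford many plan-act-verify iterations per task, which is the workload M2 is priced for.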

| Benchmark | MiniMax-M2 | Claude Sonnet 4 | Claude Sonnet 4.5 | Gemini 2.5 Pro | GPT-5 (thinking) | GLM-4.6 | Kimi K2 0905 | DeepSeek-V3.2 |
|---|---|---|---|---|---|---|---|---|
| SWE-bench Verified | 69.4 | 72.7* | 77.2* | 63.8* | 74.9* | 68* | 69.2* | 67.8* |
| Multi-SWE-Bench | 36.2 | 35.7* | 44.3 | / | / | 30 | 33.5 | 30.6 |
| SWE-bench Multilingual | 56.5 | 56.9* | 68 | / | / | 53.8 | 55.9* | 57.9* |
| Terminal-Bench | 46.3 | 36.4* | 50* | 25.3* | 43.8* | 40.5* | 44.5* | 37.7* |
| ArtifactsBench | 66.8 | 57.3* | 61.5 | 57.7* | 73* | 59.8 | 54.2 | 55.8 |
| BrowseComp | 44 | 12.2 | 19.6 | 9.9 | 54.9* | 45.1* | 14.1 | 40.1* |
| BrowseComp-zh | 48.5 | 29.1 | 40.8 | 32.2 | 65 | 49.5 | 28.8 | 47.9* |
| GAIA (text only) | 75.7 | 68.3 | 71.2 | 60.2 | 76.4 | 71.9 | 60.2 | 63.5 |
| xbench-DeepSearch | 72 | 64.6 | 66 | 56 | 77.8 | 70 | 61 | 71 |
| HLE (w/ tools) | 31.8 | 20.3 | 24.5 | 28.4* | 35.2* | 30.4* | 26.9* | 27.2* |
| τ²-Bench | 77.2 | 65.5* | 84.7* | 59.2 | 80.1* | 75.9* | 70.3 | 66.7 |
| FinSearchComp-global | 65.5 | 42 | 60.8 | 42.6* | 63.9* | 29.2 | 29.5* | 26.2 |
| AgentCompany | 36 | 37 | 41 | 39.3* | / | 35 | 30 | 34 |

Source: MiniMax

Independent testing supports these claims. Third-party benchmarks from Artificial Analysis place MiniMax-M2 in the global top five for overall intelligence, awarding it a score of 61%.

Artificial Analysis Intelligence Index (28 Oct ‘25)

This ranking puts it ahead of competitors like Google’s Gemini 2.5 Pro (60%) and just behind Anthropic’s Claude Sonnet 4.5 (63%).

For developers, this means access to a powerful, open-weight model that can handle sophisticated coding and tool-use scenarios without being locked into a proprietary ecosystem.

The ‘Impossible Triangle’: Balancing Power, Speed, and Cost

For years, developers have faced a trade-off between a model’s intelligence, its inference speed, and its operational cost—an “impossible triangle.”

MiniMax claims M2 directly addresses this challenge.

“We have been exploring whether it’s possible to create a model that achieves a better balance of performance, price, and speed, thereby allowing more people to benefit from the intelligence boost of the Agent era,” the team stated in a blog post.

Key to this balance is the model’s architecture, which prioritizes computational thrift without sacrificing capability.

By building on a Mixture-of-Experts (MoE) architecture, M2 leverages a massive pool of 230 billion total parameters but only activates a lean 10 billion for any given task, according to its technical specifications.

M2 is significantly more efficient than rivals such as DeepSeek-V3.2, which activates 37 billion parameters per token.

This architectural choice drastically reduces computational overhead and memory requirements, directly translating to lower operational costs and faster response times.
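The sparse-activation idea can be made concrete with a toy routing sketch. This is a generic illustration of MoE gating, not MiniMax's actual architecture; the expert count, top-k value, and sizes below are hypothetical, and the point is only that a gating network selects a handful of experts per token, so most of the total parameters sit idle on any single forward pass.

```python
import math
import random

# Toy Mixture-of-Experts (MoE) routing sketch. Expert count, top-k, and
# hidden size are made-up illustration values, not MiniMax-M2's real ones.
random.seed(0)

N_EXPERTS = 64   # hypothetical number of experts
TOP_K = 4        # experts activated per token (hypothetical)
D_MODEL = 16     # toy hidden size

def rand_matrix(rows, cols):
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

gate = rand_matrix(N_EXPERTS, D_MODEL)                    # gating weights
experts = [rand_matrix(D_MODEL, D_MODEL) for _ in range(N_EXPERTS)]

def moe_forward(x):
    """Route one token through its top-k experts only."""
    scores = matvec(gate, x)                              # one logit per expert
    top = sorted(range(N_EXPERTS), key=scores.__getitem__)[-TOP_K:]
    exps = [math.exp(scores[i]) for i in top]
    weights = [e / sum(exps) for e in exps]               # softmax over top-k
    out = [0.0] * D_MODEL
    for w, i in zip(weights, top):                        # only TOP_K experts run
        for j, val in enumerate(matvec(experts[i], x)):
            out[j] += w * val
    return out

y = moe_forward([random.gauss(0, 1) for _ in range(D_MODEL)])
print(f"active experts per token: {TOP_K}/{N_EXPERTS}")
```

With 230 billion total parameters but only 10 billion active, M2's ratio is roughly 4%: the same principle as this sketch, at scale.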

MiniMax M2 output vs. speed

The economic impact could be dramatic. MiniMax has set its API price at just $0.30 per million input tokens and $1.20 per million output tokens.

This aggressive pricing is approximately 8% of the cost of Claude Sonnet 4.5, while MiniMax claims M2 delivers nearly double the inference speed.
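The arithmetic behind the cost claim is easy to check. The M2 prices below come from the article; the Claude Sonnet 4.5 rates ($3 input / $15 output per million tokens) are Anthropic's published list prices at the time of writing and are assumptions here, since pricing can change. The workload mix is hypothetical.

```python
# Rough API cost comparison. M2 prices are from the article; the Claude
# Sonnet 4.5 prices are assumed list prices and may change over time.

def api_cost(input_tokens, output_tokens, in_price, out_price):
    """Cost in USD; prices are quoted per million tokens."""
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# Hypothetical agent workload: 50M input tokens, 10M output tokens.
workload = (50_000_000, 10_000_000)

m2_cost = api_cost(*workload, in_price=0.30, out_price=1.20)
claude_cost = api_cost(*workload, in_price=3.00, out_price=15.00)

print(f"MiniMax-M2:        ${m2_cost:,.2f}")
print(f"Claude Sonnet 4.5: ${claude_cost:,.2f}")
print(f"ratio: {m2_cost / claude_cost:.1%}")
```

On this input-heavy mix the ratio comes out near 9%, consistent with the roughly 8% figure MiniMax cites; the exact percentage depends on the input/output token split of a given workload.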

This efficiency has profound implications for the development of AI agents, where faster, cheaper processing loops enable more responsive and complex workflows, making sophisticated AI tools more accessible and scalable than ever before.

China’s Open-Source Offensive Continues

In a move that further cements China’s leadership in the open-source AI space, MiniMax has made the M2 model weights fully available on the developer platform Hugging Face.

MiniMax continues a trend established by other Chinese firms like DeepSeek, which have aggressively pursued an open-source strategy to build community, drive global adoption, and compete amid the fierce U.S.-China tech war.

Open-sourcing provides a strategic path forward for companies navigating hardware restrictions, allowing them to compete on innovation and cost.

This strategy places MiniMax in direct competition with its domestic rival, a rivalry that has been heating up for some time.

Earlier this year, MiniMax released its M1 model specifically to challenge DeepSeek’s dominance in the reasoning model space, emphasizing a more permissive Apache 2.0 license as a key differentiator.

The release of M2 pushes this competition further, targeting the same developer community with a compelling offer of superior performance at a lower cost.

“MiniMax’s release continues the leadership of Chinese AI labs in open source that DeepSeek kicked off in late 2024, and which has been continued by subsequent DeepSeek releases, Alibaba, Z AI, and Moonshot AI,” stated Artificial Analysis.

The release is part of a larger pattern of innovation from the company, which has a diverse portfolio that includes video generation tools and has previously set benchmarks with models featuring record-breaking 4-million-token context windows.

MiniMax’s focus on open-source, high-efficiency models signals a strategic push to capture a significant share of the market. By solving the critical balance of power, speed, and cost, the M2 model not only challenges the established order but also provides developers worldwide with a powerful new tool to build the next generation of AI-driven applications.