Tip: https://x.com/minimaxagent/status/2034237018113208619?s=46

MiniMax released M2.7, a new frontier model that the company says participated substantively in its own development. According to the announcement, M2.7 helped build its own reinforcement learning harness, update its own memory, and run iterative optimization loops during training. The model now handles an estimated 30–50% of MiniMax's internal RL research workflow, the company said.
The performance numbers are notable. On SWE-Pro, M2.7 scored 56.22%, matching GPT-5.3-Codex. On the hallucination benchmark, where lower is better, it posted 34% versus 46% for Claude Sonnet 4.6. On MLE Bench Lite, spanning 22 ML competitions, it achieved a 66.6% medal rate, tying Gemini 3.1 and trailing Opus 4.6 (75.7%) and GPT-5.4 (71.2%).
M2.7 ran more than 100 rounds of autonomous optimization on its own programming-performance scaffold, improving that scaffold's output by 30%. That loop, in which the model improves the infrastructure used to improve the model, is the part worth watching. MiniMax is calling it the first model to "deeply participate in its own evolution."
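MiniMax has not described how this loop works, but the shape it implies is a simple accept-if-better optimization cycle: propose a change to the scaffold, benchmark it, and keep it only if the score improves. The sketch below is purely illustrative; `evaluate` and `propose_patch` are hypothetical stubs standing in for a real benchmark and a real model-generated edit, not anything MiniMax has published.

```python
import random

def evaluate(scaffold):
    """Stub benchmark: scores a scaffold config (here, just sums its knobs)."""
    return sum(scaffold.values())

def propose_patch(scaffold):
    """Stub for 'ask the model for an edit': randomly nudge one parameter."""
    key = random.choice(list(scaffold))
    patched = dict(scaffold)
    patched[key] += random.uniform(-0.1, 0.2)
    return patched

def optimize(scaffold, rounds=100):
    """Hill-climb: run `rounds` propose/evaluate cycles, keeping improvements."""
    best_score = evaluate(scaffold)
    for _ in range(rounds):
        candidate = propose_patch(scaffold)
        score = evaluate(candidate)
        if score > best_score:  # accept only changes that beat the baseline
            scaffold, best_score = candidate, score
    return scaffold, best_score

if __name__ == "__main__":
    start = {"prompt_len": 1.0, "retries": 1.0, "context": 1.0}
    tuned, score = optimize(start, rounds=100)
    print(f"baseline={evaluate(start):.2f} tuned={score:.2f}")
```

The interesting property of such a loop is that the proposer and the artifact being optimized can be the same system's components, which is what makes MiniMax's recursive claim notable, and also what makes it hard to verify from benchmark numbers alone.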
The release follows a leak on March 16, when internal documentation briefly appeared on MiniMax's official docs site before being pulled the same day. The company posted official availability on March 18 via its agent account.
API pricing is unchanged at $0.30 per million input tokens and $1.20 per million output tokens. M2.7 is proprietary; its predecessor M2.5 was open-weight. MiniMax is the second Chinese AI lab after z.ai to shift from open-weight releases to proprietary frontier models.
The self-building claim is significant if it holds up — a model meaningfully participating in its own training pipeline would be a first. But MiniMax has not published a technical report, and the mechanism behind the self-participation is not explained in the announcement. The benchmark numbers are independently testable; the recursive improvement claim is currently asserted, not demonstrated.

