MiniMax open-sources its first reasoning model: rivaling DeepSeek at a compute cost of only about $530,000
Gate News bot report: MiniMax announced on June 17 that it will release major updates for five consecutive days. Today's first release is MiniMax-M1, the company's first open-source reasoning model.
According to the official announcement, MiniMax-M1 has been benchmarked against open-source models such as DeepSeek-R1 and Qwen3, and its performance approaches that of the most advanced overseas models.
The official blog also noted that, thanks to two major technological innovations, the MiniMax-M1 training process was efficient "beyond expectations": the reinforcement learning phase was completed in just three weeks on 512 H800 GPUs, at a compute rental cost of only $534,700, an order of magnitude lower than initially expected.
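For context, a rough back-of-the-envelope check of the reported figures (assuming the full 512-GPU cluster ran continuously for exactly three weeks, which the announcement does not specify) implies an hourly rental rate on the order of $2 per H800 GPU:

```python
# Sanity check of the reported cost, not figures from the announcement itself.
# Assumption: all 512 GPUs ran around the clock for exactly 3 weeks (21 days).
GPUS = 512
HOURS = 3 * 7 * 24              # 504 hours in three weeks
TOTAL_COST_USD = 534_700        # reported compute rental cost

gpu_hours = GPUS * HOURS        # 258,048 GPU-hours
rate = TOTAL_COST_USD / gpu_hours

print(f"{gpu_hours:,} GPU-hours -> ~${rate:.2f} per H800 GPU-hour")
# 258,048 GPU-hours -> ~$2.07 per H800 GPU-hour
```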
Source: Jinshi