AIMPACT message, May 5 (UTC+8): NVIDIA has announced that its Megatron Core framework now provides end-to-end support for advanced optimizers such as Muon, along with research optimizers like MOP and REKLS, with the aim of improving the efficiency of large-scale model training. The official statement notes that efficiently training models at the scale of Kimi K2 and Qwen3 30B requires going beyond standard data-parallel techniques. No performance metrics or implementation details have been disclosed at this time. (Source: InfoQ)
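For readers unfamiliar with Muon, the sketch below shows the core idea of the published Muon update rule: accumulate a momentum buffer, then approximately orthogonalize it with a Newton-Schulz iteration instead of an SVD. This is a minimal single-device illustration only; Megatron Core's actual distributed implementation is not described in the announcement, and the function names, hyperparameters, and iteration count here are illustrative assumptions.

```python
import numpy as np

def newton_schulz(G, steps=5, eps=1e-7):
    """Approximately orthogonalize G with a quintic Newton-Schulz iteration.

    The coefficients below are the ones popularized in the original Muon
    write-up; they drive the singular values of G toward ~1 without an SVD.
    """
    a, b, c = 3.4445, -4.7750, 2.0315
    X = G / (np.linalg.norm(G) + eps)  # normalize so the iteration converges
    transposed = G.shape[0] > G.shape[1]
    if transposed:
        X = X.T                        # iterate on the wide orientation
    for _ in range(steps):
        A = X @ X.T
        B = b * A + c * (A @ A)
        X = a * X + B @ X
    return X.T if transposed else X

def muon_step(W, grad, momentum, lr=0.02, beta=0.95):
    """One Muon update for a 2-D weight matrix: momentum, then orthogonalize."""
    momentum = beta * momentum + grad
    W = W - lr * newton_schulz(momentum)
    return W, momentum
```

In practice Muon is applied only to 2-D hidden-layer weight matrices, with embeddings and scalar parameters handled by a conventional optimizer such as AdamW; scaling this scheme across data-parallel ranks is precisely the kind of engineering the announcement alludes to.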
NVIDIA Megatron Core Adds Support for Muon and Other Optimizers