ME News reports that on April 6 (UTC+8), Hugging Face released the Gemma-4-21B-REAP model. According to the developers, the model performs well on reasoning tasks and even shows improved accuracy. As for VRAM requirements, it can run on as little as 12GB with a limited context length, and needs 16GB for the full context window. The developers encourage members of the MLX and GGUF communities to try it out. (Source: InfoQ)
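Since the announcement points at the GGUF community, here is a minimal sketch of how such a release might be run locally with llama.cpp; the model file name, quantization, and context size below are illustrative assumptions, not details from the release:

```shell
# Hypothetical example: run a GGUF quantization of the model with llama.cpp.
# The file name is an assumption; substitute whatever the release actually publishes.
# -c 8192 caps the context window, which is how VRAM use is kept near the
# reported 12GB figure; -ngl 99 offloads all layers to the GPU.
llama-cli -m gemma-4-21b-reap-Q4_K_M.gguf -c 8192 -ngl 99 \
  -p "Explain the Monty Hall problem step by step."
```

Raising `-c` toward the model's full context window is what would push memory use toward the 16GB figure cited by the developers.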
Hugging Face Launches Gemma-4-21B-REAP Model with Strong Reasoning Performance