Tether Launches Cross-Platform BitNet LoRA Framework for Training Billion-Parameter AI Models on Consumer Devices

Summary
Tether has announced a cross-platform BitNet LoRA framework, part of QVAC Fabric, that enables billion-parameter AI models to be trained on consumer devices. The framework optimizes Microsoft's BitNet (a 1-bit LLM) for low computational and memory requirements and supports Adreno, Mali, Apple Bionic, and other platforms; a 1B-parameter model can be fine-tuned in approximately one hour. It is the first to bring 1-bit LLM LoRA fine-tuning to non-NVIDIA hardware. BitNet models run 2–11x faster on mobile GPUs than on CPUs and use up to 77.8% less memory than 16-bit models. Tether states that the technology reduces reliance on cloud infrastructure and enables decentralized AI training.

Odaily Planet Daily report: According to an official announcement, Tether has launched a cross-platform BitNet LoRA fine-tuning framework within QVAC Fabric, optimizing the training and inference of Microsoft BitNet (1-bit LLM). This framework significantly reduces computational and memory requirements, enabling the training and fine-tuning of billion-parameter models on laptops, consumer-grade GPUs, and smartphones.
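The announcement does not detail the framework's internals, but the memory savings of LoRA itself come from a well-known construction: rather than updating a full weight matrix, training updates two small low-rank factors. A minimal NumPy sketch (variable names and dimensions are our own illustration, not Tether's API):

```python
import numpy as np

# LoRA idea: keep the base weight W frozen and train only two low-rank
# factors B (d_out x r) and A (r x d_in). The adapted layer computes
# W @ x + B @ (A @ x), so trainable parameters shrink from d_out*d_in
# to r*(d_out + d_in).

d_in, d_out, rank = 4096, 4096, 8

full_params = d_out * d_in            # parameters if we trained W directly
lora_params = rank * (d_out + d_in)   # parameters in the adapter only

W = np.zeros((d_out, d_in), dtype=np.float32)  # frozen base weight (stand-in)
B = np.zeros((d_out, rank), dtype=np.float32)  # adapter factor, init to zero
A = np.random.randn(rank, d_in).astype(np.float32) * 0.01

x = np.random.randn(d_in).astype(np.float32)
y = W @ x + B @ (A @ x)  # adapter contributes a low-rank delta to the output

print(full_params, lora_params)  # 16777216 vs 65536 trainable parameters
```

At rank 8 on a 4096x4096 layer, the adapter holds roughly 1/256 of the full layer's parameters, which is why fine-tuning can fit on laptops and phones while the quantized base weights stay frozen.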

This solution is the first to enable fine-tuning of BitNet models on mobile GPUs (including Adreno, Mali, and Apple Bionic). Testing shows that a 125M parameter model can be fine-tuned in approximately 10 minutes, a 1B model in about an hour, and the approach even scales to 13B parameter models on mobile devices.

In addition, the framework supports heterogeneous hardware such as Intel, AMD, and Apple Silicon, and for the first time enables 1-bit LLM LoRA fine-tuning on non-NVIDIA devices. In terms of performance, BitNet models achieve 2 to 11 times faster inference on mobile GPUs compared to CPUs, while reducing memory usage by up to 77.8% compared to traditional 16-bit models.
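A back-of-envelope calculation shows why near-1-bit weights cut memory so sharply. BitNet b1.58 uses ternary weights {-1, 0, +1}, i.e. about log2(3) ≈ 1.58 bits of information per weight versus 16 bits for fp16. (The 77.8% figure reported above is Tether's end-to-end measurement, which will also reflect activations, packing overhead, and other components; this sketch covers weight storage only.)

```python
import math

params = 1_000_000_000  # a 1B-parameter model

fp16_bytes   = params * 16 / 8       # 16 bits per weight
ternary_bits = math.log2(3)          # ternary weights carry ~1.58 bits each
b158_bytes   = params * ternary_bits / 8

print(f"fp16:   {fp16_bytes / 1e9:.2f} GB")          # fp16:   2.00 GB
print(f"b1.58:  {b158_bytes / 1e9:.2f} GB")          # b1.58:  0.20 GB
print(f"saving: {1 - b158_bytes / fp16_bytes:.1%}")  # saving: 90.1%
```

The ideal weight-only saving (~90%) exceeds the reported 77.8% overall reduction, which is consistent with real deployments paying extra for activations and runtime buffers that quantization does not shrink.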

Tether stated that this technology has the potential to reduce reliance on high-end computing power and cloud infrastructure, promoting the decentralization and localization of AI training, and providing a foundation for new use cases such as federated learning.

Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information. Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.