PANews, March 21: According to an official announcement, Tether has launched a cross-platform BitNet LoRA fine-tuning framework within QVAC Fabric, optimized for training and inference of Microsoft's BitNet (a 1-bit LLM). The framework significantly reduces compute and memory requirements, enabling billion-parameter models to be trained and fine-tuned on laptops, consumer-grade GPUs, and smartphones. It is the first solution to fine-tune BitNet models on mobile GPUs, including Adreno, Mali, and Apple Bionic.

In testing, a 125M-parameter model fine-tuned in roughly 10 minutes and a 1B model in about one hour, and the framework scales to 13B-parameter models on mobile devices. It also supports heterogeneous hardware such as Intel, AMD, and Apple Silicon, making it the first to enable 1-bit LLM LoRA fine-tuning on non-NVIDIA devices. On performance, BitNet inference on mobile GPUs runs 2 to 11 times faster than on CPUs while cutting VRAM usage by up to 77.8% compared with traditional 16-bit models.

Tether states that the technology could break reliance on high-end compute and cloud infrastructure, pushing AI training toward decentralization and local execution, and laying a foundation for emerging applications such as federated learning.
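QVAC Fabric's API has not been published, so the following is only a generic sketch of the LoRA technique the announcement refers to: the (in BitNet's case, 1-bit-quantized) base weights stay frozen, and only two small low-rank matrices are trained, which is why trainable-parameter counts and memory use drop so sharply. All dimensions and names below are illustrative assumptions, not Tether's implementation.

```python
import numpy as np

# Illustrative LoRA sketch (not Tether's code): a frozen base weight W plus
# trainable low-rank adapters A and B. In BitNet, W would additionally be
# quantized to ~1 bit per weight; here it is plain float32 for clarity.

d, k, r = 4096, 4096, 8  # assumed layer dims and a typical LoRA rank

# Frozen base weight: receives no gradients during fine-tuning.
W = np.random.randn(d, k).astype(np.float32)

# Trainable adapters. A starts at zero so training begins from the base model.
A = np.zeros((d, r), dtype=np.float32)
B = (np.random.randn(r, k) * 0.01).astype(np.float32)

def forward(x):
    # Effective weight is W + A @ B, computed without materializing A @ B.
    return x @ W + (x @ A) @ B

full_params = W.size            # what a full fine-tune would update
lora_params = A.size + B.size   # what LoRA actually updates
print(f"trainable: {lora_params:,} vs full fine-tune: {full_params:,}")
print(f"reduction: {full_params / lora_params:.0f}x")
```

With these assumed dimensions, LoRA trains 65,536 parameters instead of ~16.8 million for this one layer, a 256x reduction, which is the general mechanism that lets billion-parameter models be tuned on consumer hardware.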
Tether Launches Cross-Platform BitNet LoRA Framework for Training Billion-Parameter Models on Consumer Devices
Tether has unveiled a cross-platform BitNet LoRA framework within QVAC Fabric, enabling the training of Microsoft’s 1-bit BitNet models on consumer hardware. The tool allows billion-parameter models to run on laptops, smartphones, and mobile GPUs such as Adreno, Mali, and Apple Bionic. A 1B-parameter model takes approximately one hour to fine-tune. The system supports Intel, AMD, and Apple Silicon, bringing 1-bit LLM LoRA tuning to non-NVIDIA devices for the first time. BitNet models run 2–11x faster on mobile GPUs than on CPUs, using up to 77.8% less memory than their 16-bit counterparts. Tether claims the technology can reduce cloud dependency, enabling decentralized AI training.
Source: PANews
Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information.
Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.