Tether's BrainWhisperer Achieves 98.3% Accuracy in Decoding Brain Signals

TechFlow

According to TechCrunch, in the latest testing of Tether’s BrainWhisperer project, the accuracy of converting brain signals into text reached 98.3%, ranking fourth among 466 teams in the Brain-to-Text '25 Kaggle competition with a 1.78% word error rate (WER). The system is built on OpenAI’s Whisper model, fine-tuned with LoRA (Low-Rank Adaptation), and decodes cortical electrical signals into text through a pipeline that integrates multiple models. Tether is also advancing cross-individual signal decoding frameworks and non-invasive brain-computer interface (BCI) devices, and has released Brain OS, an open-source brain operating system based on its QVAC platform.
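The report names LoRA fine-tuning but gives no implementation details. As a rough illustration of the LoRA idea only (the dimensions, rank, and scaling below are generic assumptions, not Tether's actual configuration): a frozen pretrained weight matrix is augmented with a trainable low-rank update, so only a small fraction of parameters is trained.

```python
import numpy as np

# Minimal LoRA sketch: W is a frozen pretrained weight (e.g. one linear
# layer inside a Whisper-like model); A and B form a trainable low-rank
# update. All sizes here are illustrative assumptions.
d_out, d_in, r, alpha = 512, 512, 8, 16
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init so the
                                            # update starts as a no-op

def lora_forward(x):
    # Effective weight is W + (alpha / r) * B @ A, applied without
    # ever modifying W itself.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)

# Only A and B are trained: here 8192 of 262144 parameters (~3.1%).
trainable_fraction = (A.size + B.size) / W.size
print(trainable_fraction)
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen model exactly; training then moves only `A` and `B`, which is what makes LoRA cheap enough to fine-tune a large speech model for a new signal domain.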

Disclaimer: The information on this page may have been obtained from third parties and does not necessarily reflect the views or opinions of KuCoin. This content is provided for general informational purposes only, without any representation or warranty of any kind, nor shall it be construed as financial or investment advice. KuCoin shall not be liable for any errors or omissions, or for any outcomes resulting from the use of this information. Investments in digital assets can be risky. Please carefully evaluate the risks of a product and your risk tolerance based on your own financial circumstances. For more information, please refer to our Terms of Use and Risk Disclosure.