A new framework from Tether brings model training to smartphones and consumer chips, using efficiency gains to lower hardware and cost barriers.

Tether Unveils AI Framework to Train LLMs on Smartphones and Consumer Hardware

2026/03/18 12:57
  • Tether has launched a framework that enables AI models to be trained directly on consumer devices rather than cloud systems.
  • The system uses BitNet and LoRA to reduce memory and compute demands, making on-device training more practical.
  • It supports a wide range of hardware and builds on Tether’s broader push into local, privacy-focused AI tools.

Tether has rolled out a new AI framework designed to bring large language model training onto consumer devices, including smartphones and a range of non-Nvidia GPUs. The system is part of its QVAC initiative, which centres on running and refining AI models locally rather than through cloud-based infrastructure.

The framework leverages Microsoft’s BitNet architecture together with LoRA techniques to reduce the computational load and memory requirements needed for model training. By using a 1-bit model structure, BitNet significantly cuts VRAM usage compared with traditional 16-bit approaches, allowing more efficient deployment on constrained hardware.
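The scale of the memory saving from a 1-bit weight format is easy to see with a back-of-envelope calculation. The sketch below is illustrative only, assuming idealised storage with no overheads; the parameter count matches the 1-billion-parameter figure reported later in the article, but the resulting numbers are not figures from Tether's framework.

```python
# Back-of-envelope estimate of weight-storage memory, comparing
# conventional 16-bit weights with an idealised 1-bit (BitNet-style)
# representation. Activations, optimiser state, and packing overheads
# are deliberately ignored.

def weight_memory_gb(n_params: int, bits_per_weight: float) -> float:
    """Memory needed just to store the weights, in gigabytes."""
    return n_params * bits_per_weight / 8 / 1e9

n_params = 1_000_000_000  # a 1B-parameter model

fp16_gb = weight_memory_gb(n_params, 16)   # traditional 16-bit weights
bitnet_gb = weight_memory_gb(n_params, 1)  # idealised 1-bit weights

print(f"16-bit: {fp16_gb:.2f} GB, 1-bit: {bitnet_gb:.3f} GB "
      f"({fp16_gb / bitnet_gb:.0f}x smaller)")  # 2.00 GB vs 0.125 GB, 16x
```

In practice the real footprint sits somewhere above the 1-bit ideal, but the calculation shows why a phone-class memory budget becomes plausible at all.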


Performance and Capacity

Tether reported that it successfully fine-tuned models with up to one billion parameters on smartphones in under two hours, with smaller models requiring only minutes. The system can also handle larger configurations, supporting models of up to 13 billion parameters on mobile devices.

The framework is compatible with a wide range of hardware, including AMD, Intel and Apple silicon, along with Qualcomm and Apple mobile GPUs, enabling both training and inference across different platforms. It additionally supports LoRA fine-tuning on non-Nvidia systems, extending functionality beyond the typical AI hardware stack.
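LoRA keeps on-device fine-tuning cheap by freezing the base weights and training only a low-rank correction. The sketch below shows the idea on a single weight matrix; the dimensions, rank, and NumPy implementation are illustrative assumptions for this article, not details of Tether's framework.

```python
import numpy as np

# Minimal LoRA sketch on one layer: instead of training the full
# d_out x d_in matrix W, train a low-rank pair B (d_out x r) and
# A (r x d_in), and compute W @ x + B @ (A @ x) in the forward pass.

d_in, d_out, r = 1024, 1024, 8  # illustrative dimensions and rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in)) * 0.01  # frozen base weights
A = rng.standard_normal((r, d_in)) * 0.01      # trainable adapter
B = np.zeros((d_out, r))                        # trainable, zero-initialised

def lora_forward(x):
    # Base path plus low-rank correction; because B starts at zero,
    # the adapted layer initially matches the frozen model exactly.
    return W @ x + B @ (A @ x)

full_params = d_out * d_in           # what full fine-tuning would train
lora_params = d_out * r + r * d_in   # what LoRA trains instead
print(f"full: {full_params:,} trainable, LoRA: {lora_params:,} "
      f"({full_params // lora_params}x fewer)")  # 1,048,576 vs 16,384, 64x
```

Only `A` and `B` need gradients and optimiser state, which is what makes fine-tuning viable on hardware that could never hold full training state for every layer.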

This release builds on Tether’s ongoing development of QVAC, which has included tools for local model execution and fine-tuning across consumer hardware. The initiative reflects a broader effort to prioritise on-device AI processing, with an emphasis on reducing dependence on centralised cloud services.


The post Tether Unveils AI Framework to Train LLMs on Smartphones and Consumer Hardware appeared first on Crypto News Australia.

Disclaimer: The articles reposted on this site are sourced from public platforms and are provided for informational purposes only. They do not necessarily reflect the views of MEXC. All rights remain with the original authors. If you believe any content infringes on third-party rights, please contact crypto.news@mexc.com for removal. MEXC makes no guarantees regarding the accuracy, completeness, or timeliness of the content and is not responsible for any actions taken based on the information provided. The content does not constitute financial, legal, or other professional advice, nor should it be considered a recommendation or endorsement by MEXC.