Tether has announced the release of a new framework designed to enable the training and inference of large language models on consumer-grade hardware.
Tether’s QVAC Launches World’s First Cross-Platform BitNet LoRA Framework to Enable Billion-Parameter AI Training and Inference on Consumer GPUs and Smartphones

Learn more: https://t.co/8ygOFzhfjn

— Tether (@tether) March 17, 2026
The system, developed under its QVAC Fabric initiative, introduces what the company describes as the first cross-platform LoRA fine-tuning framework for Microsoft’s BitNet models, also known as 1-bit large language models.
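LoRA (Low-Rank Adaptation) keeps the base model weights frozen and trains only a small pair of low-rank matrices per layer, which is why fine-tuning becomes tractable on modest hardware. The sketch below is a generic, illustrative NumPy version of that idea — it is not Tether's or Microsoft's implementation, and the layer size and rank are made-up values:

```python
import numpy as np

# Illustrative LoRA sketch (not the QVAC Fabric implementation).
# The frozen base weight W is augmented with a trainable low-rank
# update B @ A, scaled by alpha / r, so the effective weight is
# W + (alpha / r) * B @ A.

rng = np.random.default_rng(0)

d_out, d_in, r = 512, 512, 8      # hypothetical layer width and LoRA rank
alpha = 16                        # common LoRA scaling hyperparameter

W = rng.standard_normal((d_out, d_in))       # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01    # trainable down-projection
B = np.zeros((d_out, r))                     # trainable up-projection (zero init)

def lora_forward(x):
    # Base path plus the low-rank adapter path.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
y = lora_forward(x)              # equals W @ x at init, since B is zero

full_params = W.size             # what a full fine-tune would train
lora_params = A.size + B.size    # what LoRA actually trains
print(f"trainable: {lora_params} vs full: {full_params} "
      f"({lora_params / full_params:.1%})")
```

At this toy size, LoRA trains roughly 3% of the layer's parameters; at billion-parameter scale the relative fraction is typically far smaller, which is what makes on-device fine-tuning plausible.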
The framework is intended to reduce the computational and memory requirements typically associated with developing and maintaining AI models.
Traditionally, such workloads have required high-performance NVIDIA GPUs or cloud-based infrastructure, limiting access to organizations with significant technical resources and capital.
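The memory argument can be made concrete with back-of-envelope arithmetic. BitNet b1.58, Microsoft's published variant, uses ternary weights (about 1.58 bits per parameter); the figures below are generic estimates of weight storage only (ignoring activations, KV cache, and optimizer state), not benchmarks from Tether:

```python
# Rough weight-memory estimate for a 1-billion-parameter model at
# different precisions. Illustrative arithmetic only, not official figures.

params = 1_000_000_000

def weight_gib(bits_per_param):
    # bits -> bytes -> GiB
    return params * bits_per_param / 8 / 2**30

for name, bits in [("fp16", 16), ("int8", 8), ("BitNet b1.58 (ternary)", 1.58)]:
    print(f"{name:>24}: {weight_gib(bits):.2f} GiB")
```

Roughly 1.9 GiB at fp16 shrinks to under 0.2 GiB at 1.58 bits per weight — the kind of reduction that moves billion-parameter models into smartphone memory budgets.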
According to Tether, the new framework allows users to fine-tune and run billion-parameter models across a range of consumer devices, including laptops and smartphones, as well as hardware from vendors beyond NVIDIA, including Intel, AMD, and Apple Silicon.
The system is designed to support heterogeneous hardware environments beyond NVIDIA-based systems.
Tether reported successful fine-tuning of BitNet models on mobile GPUs, with smaller models trained in minutes and larger models in hours on smartphones.
The company stated that the framework improves memory efficiency and inference speed, enabling more advanced workloads on consumer hardware. It may also reduce reliance on centralized infrastructure while supporting distributed training approaches such as federated learning.
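Federated learning pairs naturally with LoRA: each device fine-tunes its own small adapter locally, and only the adapter weights (never the raw data or the full model) are averaged centrally. The sketch below shows the classic FedAvg step on hypothetical LoRA adapters; all names, shapes, and the stand-in "local update" are assumptions for illustration:

```python
import numpy as np

r, d = 4, 64                      # hypothetical LoRA rank and layer width

def local_update(seed):
    # Stand-in for on-device fine-tuning: each client produces its own
    # adapter matrices. Real training would fit these to local data.
    g = np.random.default_rng(seed)
    return {"A": g.standard_normal((r, d)), "B": g.standard_normal((d, r))}

clients = [local_update(s) for s in range(3)]

def fedavg(updates):
    # Element-wise mean of each adapter matrix across clients.
    return {k: np.mean([u[k] for u in updates], axis=0) for k in updates[0]}

global_adapter = fedavg(clients)
print(global_adapter["A"].shape, global_adapter["B"].shape)
```

Because only the tiny adapter matrices cross the network, the per-round communication cost stays small even when the underlying model has billions of parameters.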
The era of Stable Intelligence is here 🤖

Tether’s QVAC Fabric just released the world’s first cross-platform 1-bit LLM LoRA fine-tuning framework. QVAC Fabric extends Microsoft's ultra-efficient BitNet architecture, allowing fine-tuning and inference of LLMs directly on your…

— QVAC (@qvac) March 17, 2026
Tether CEO Paolo Ardoino added that AI is expected to be a major force shaping society, and it should be accessible rather than controlled by a small group of providers.
He said that enabling AI training on everyday devices reduces reliance on centralized infrastructure, supports innovation, and allows for a more decentralized and inclusive system, and that Tether will continue expanding local, on-device AI capabilities.
Tether noted that additional technical materials, including research documentation, benchmarks, and implementation resources, have been made publicly available through the Hugging Face blog.