Tether has announced a new framework designed to enable training and inference of large language models on consumer-grade hardware.

The system, developed under its QVAC Fabric initiative, introduces what the company describes as the first cross-platform LoRA fine-tuning framework for Microsoft’s BitNet models, also known as 1-bit large language models.
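The article does not include the framework's code, but the LoRA technique it builds on can be sketched independently: rather than updating a full weight matrix, LoRA trains two small low-rank matrices whose product is added to the frozen base weights. The dimensions and names below are illustrative, not taken from QVAC Fabric.

```python
import numpy as np

# LoRA (Low-Rank Adaptation) in a nutshell: keep the base weight W
# (d_out x d_in) frozen, and train only B (d_out x r) and A (r x d_in).
# The effective weight is W + (alpha / r) * B @ A.

rng = np.random.default_rng(0)

d_out, d_in, rank, alpha = 1024, 1024, 8, 16

W = rng.standard_normal((d_out, d_in))        # frozen base weight
A = rng.standard_normal((rank, d_in)) * 0.01  # trainable
B = np.zeros((d_out, rank))                   # trainable, zero-initialized

def lora_forward(x):
    # Base path plus low-rank update; only A and B would receive gradients.
    return W @ x + (alpha / rank) * (B @ (A @ x))

full_params = W.size
lora_params = A.size + B.size
print(f"full fine-tune params: {full_params:,}")
print(f"LoRA trainable params: {lora_params:,}")
print(f"reduction factor:      {full_params / lora_params:.0f}x")
```

The drastically smaller trainable-parameter count is what makes fine-tuning feasible on memory-constrained consumer devices.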

The framework is intended to reduce the computational and memory requirements typically associated with developing and maintaining AI models. 
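The memory savings of 1-bit models come from constraining weights to a tiny value set. As a hedged illustration (a simplified version of the "absmean" ternary quantizer described in Microsoft's BitNet b1.58 paper, not QVAC Fabric's actual code):

```python
import numpy as np

# BitNet-style "1-bit" (strictly ~1.58-bit ternary) weights take values
# in {-1, 0, +1}, scaled by a per-tensor factor. Simplified sketch only.

rng = np.random.default_rng(1)
W = rng.standard_normal((4, 4)).astype(np.float32)

def ternary_quantize(w, eps=1e-8):
    scale = np.abs(w).mean() + eps           # per-tensor absmean scale
    q = np.clip(np.round(w / scale), -1, 1)  # round, then clip to {-1,0,+1}
    return q.astype(np.int8), scale

q, scale = ternary_quantize(W)
dequant = q * scale  # approximate reconstruction of the original weights

# Each ternary weight needs about log2(3) = 1.58 bits instead of 32.
print("unique quantized values:", sorted(np.unique(q).tolist()))
print(f"fp32 bytes: {W.nbytes}, packed ternary bits: {W.size * 1.58:.0f}")
```

Shrinking each weight from 32 bits to under 2 is what brings billion-parameter models within reach of laptop and smartphone memory budgets.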

Traditionally, such workloads have required high-performance NVIDIA GPUs or cloud-based infrastructure, limiting access to organizations with significant technical resources and capital.

According to Tether, the new framework allows users to fine-tune and run billion-parameter models across a range of consumer devices, including laptops and smartphones, as well as on GPUs from vendors such as Intel and AMD and on Apple Silicon.

The system is designed to support heterogeneous hardware environments beyond NVIDIA-based systems. 

Tether reported successful fine-tuning of BitNet models on mobile GPUs, with smaller models trained in minutes and larger models in hours on smartphones. 

The company stated that the framework improves memory efficiency and inference speed, enabling more advanced workloads on consumer hardware. It may also reduce reliance on centralized infrastructure while supporting distributed training approaches such as federated learning.
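Federated learning, mentioned here as a downstream possibility, means devices train locally and share only model updates, so raw data never leaves the device. A minimal sketch of one federated-averaging (FedAvg) aggregation step, purely illustrative and not the framework's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(2)

# Three devices, each with locally fine-tuned parameters (e.g. LoRA
# adapter weights flattened to a vector) and a local dataset size.
local_params = [rng.standard_normal(5) for _ in range(3)]
num_samples = np.array([100.0, 300.0, 600.0])

def fed_avg(params, counts):
    # Weight each device's contribution by its share of the total data.
    weights = counts / counts.sum()
    return sum(w * p for w, p in zip(weights, params))

global_params = fed_avg(local_params, num_samples)
print("aggregated global parameters:", global_params)
```

Only the small aggregated vector crosses the network, which is why on-device training pairs naturally with this kind of decentralized setup.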

Tether CEO Paolo Ardoino said that AI is expected to be a major force shaping society, and that it should be accessible rather than controlled by a small group of providers.

He added that enabling AI training on everyday devices reduces reliance on centralized infrastructure, supports innovation, and allows for a more decentralized and inclusive system, and that the company intends to keep expanding local, on-device AI capabilities.

Tether noted that additional technical materials, including research documentation, benchmarks, and implementation resources, have been made publicly available through the Hugging Face blog.
