Research
February 16, 2026
10 min read

Efficient On-Device TTS: Compressing Qwen3 TTS for Apple Silicon

Five orthogonal compression techniques reduce Qwen3 TTS from 2.44 GB to 808 MB (67% reduction) while preserving audio quality, enabling real-time speech synthesis on edge devices.

View on GitHub →
67%
Size Reduction
59%
Memory Reduction
0.68x
Real-time Factor
808 MB
Final Model Size

Abstract

We present a comprehensive post-training compression pipeline for deploying the Qwen3 TTS 0.6B speech synthesis model on edge devices with Apple Silicon. Our approach combines five orthogonal, stackable techniques (vocabulary pruning, speech tokenizer pruning, 4-bit weight quantization, MLP neuron pruning, and transformer layer pruning) to reduce total model size from 2,494 MB to 808 MB (a 67.6% reduction) while preserving perceptually equivalent audio quality.

Central to our approach is a novel token map indirection scheme that reduces the text embedding matrix from 622 MB to 194 MB without retraining the tokenizer or modifying the model architecture. We implement the full inference pipeline natively in Swift using Apple's MLX framework, achieving faster-than-real-time synthesis (RTF 0.68) with peak memory of 2.13 GB.

Model Architecture

Qwen3 TTS 0.6B follows a codec-based speech synthesis paradigm with three main components:

Talker: 28-layer Transformer, hidden size 1024, 16 attention heads (GQA with 8 KV heads), M-RoPE, SwiGLU MLP (see the config sketch below)

CodePredictor: 5-layer Transformer with 16 codebook heads, QK-Norm using RMSNorm

SpeechTokenizer: Conv decoder + Split-RVQ with 1 semantic + 15 acoustic codebooks, 12.5 Hz frame rate, 24 kHz output
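For reference, the Talker hyperparameters above can be summarized as a small Swift config; this struct is purely illustrative and does not mirror the repo's actual types:

struct TalkerConfig {
    let layers = 28          // transformer blocks
    let hiddenSize = 1024
    let queryHeads = 16      // attention heads
    let kvHeads = 8          // grouped-query attention (GQA)
    let rope = "M-RoPE"      // multimodal rotary position embedding
    let mlp = "SwiGLU"       // gated MLP
}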

The storage breakdown of the original bf16 model reveals the key compression targets: within the 1,812 MB main model, the text embedding matrix accounts for 34.4% (622 MB) due to Qwen3's full 151K multilingual vocabulary, MLP layers take 34.4% (623 MB), and attention layers occupy 22.9% (415 MB).
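The embedding figure follows directly from the matrix shape:

151,936 rows × 2,048 dims × 2 bytes (bf16) = 622,329,856 bytes ≈ 622 MB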

Compression Pipeline

Five orthogonal techniques, each targeting a distinct source of redundancy; they compose without interference and can be applied in any order. The released 808 MB model results from the four steps below (steps 2 and 3 together make up speech tokenizer pruning); the remaining two techniques, MLP neuron pruning and layer pruning, are evaluated in the Quality Assessment section.

1. Vocabulary pruning (-428 MB): 151K → 47K tokens via token map indirection. Lossless.

2. ST encoder stripping (-225 MB): removes the speech tokenizer encoder, which is needed only for voice cloning. Lossless.

3. FP32 → FP16 (-228 MB): converts the speech tokenizer decoder; max |w| < 36, comfortably within fp16 range.

4. 4-bit quantization (-805 MB): quantizes 249 linear layers; embeddings are kept in bf16 (see the sketch after this list).
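To make step 4 concrete, here is a self-contained sketch of 4-bit affine group quantization in Swift. The group size, packing layout, and names are illustrative assumptions; the actual pipeline quantizes through MLX's quantized linear layers rather than this code.

func quantize4bit(_ weights: [Float], groupSize: Int = 32)
    -> (packed: [UInt8], scales: [Float], mins: [Float]) {
    var packed: [UInt8] = [], scales: [Float] = [], mins: [Float] = []
    for start in stride(from: 0, to: weights.count, by: groupSize) {
        let group = Array(weights[start ..< min(start + groupSize, weights.count)])
        let lo = group.min()!, hi = group.max()!
        let scale = max(hi - lo, 1e-8) / 15               // 4 bits: 16 levels
        scales.append(scale); mins.append(lo)
        // 4-bit code per weight: q = round((w - lo) / scale), clamped to 0...15,
        // so dequantization is w ≈ lo + scale * q.
        var codes = group.map { UInt8(min(15, max(0, (($0 - lo) / scale).rounded()))) }
        if codes.count % 2 == 1 { codes.append(0) }       // pad to a whole byte
        for i in stride(from: 0, to: codes.count, by: 2) {
            packed.append(codes[i] | (codes[i + 1] << 4)) // two codes per byte
        }
    }
    return (packed, scales, mins)
}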

Token Map Indirection

The text embedding matrix [151,936 x 2,048] inherits Qwen3's full multilingual vocabulary, but TTS only uses ~47K tokens. Instead of retraining the tokenizer, we use a simple integer mapping array:

embed(t) = E'[m[t]]

where E' is the pruned embedding matrix and m[t] maps each original token id t to its row in E'; ids the TTS never uses map to a shared zero row. This is mathematically lossless: every preserved embedding row is an exact copy from the original matrix.
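A minimal CPU-side sketch of constructing the map (function and variable names are hypothetical, not the repo's API). Row 0 of the pruned matrix serves as the shared zero vector for every token the TTS never emits:

func buildTokenMap(embedding: [[Float]], usedTokens: Set<Int>)
    -> (map: [Int], pruned: [[Float]]) {
    var map = [Int](repeating: 0, count: embedding.count)         // default: zero row
    var pruned: [[Float]] = [[Float](repeating: 0, count: embedding[0].count)]
    for t in usedTokens.sorted() {
        map[t] = pruned.count             // next free row of E'
        pruned.append(embedding[t])       // exact copy of the original row: lossless
    }
    return (map, pruned)
}

Lookup is then embed(t) == pruned[map[t]], exactly as in the formula above.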

BPE Space-Prefix Insight

A critical finding: BPE tokenizers produce different tokens for the same word depending on context, in particular on whether the word is preceded by a space. Omitting the space-prefixed variants causes mid-sentence words to map to zero vectors, triggering premature EOS.

encode("my")  = [2408]   # sentence-initial
encode(" my") = [847]    # mid-sentence (different token!)

Including both variants grows the kept set from 20K to 47K tokens, still only 31% of the original 152K vocabulary.
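A sketch of gathering the kept token set with both context variants; the encode closure stands in for whatever tokenizer is in use, and the names are illustrative:

func collectUsedTokens(words: [String], encode: (String) -> [Int]) -> Set<Int> {
    var used = Set<Int>()
    for word in words {
        used.formUnion(encode(word))          // sentence-initial form
        used.formUnion(encode(" " + word))    // mid-sentence, space-prefixed form
    }
    return used
}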

Results

Model Size Comparison

Configuration | Main Model | Speech Tokenizer | Total | Reduction
Original (bf16) | 1,812 MB | 682 MB | 2,494 MB | baseline
+ Vocab pruning | 1,384 MB | 682 MB | 2,066 MB | 17.2%
+ ST pruning | 1,384 MB | 229 MB | 1,613 MB | 35.3%
+ 4-bit quantization | 579 MB | 229 MB | 808 MB | 67.6%

Inference Performance (Apple Silicon)

Configuration | Disk (MB) | Peak Memory (GB) | Load (s) | RTF
Original bf16 | 2,494 | 5.14 | 2.74 | 0.70
Original 4-bit | 1,611 | 4.66 | 2.73 | 0.74
Pruned bf16 | 1,613 | 2.81 | 2.58 | 0.66
Pruned 4-bit | 808 | 2.13 | 2.50 | 0.68

Quality Assessment

Vocabulary pruning: Lossless
ST pruning (fp16 + encoder strip): Quasi-lossless
4-bit quantization: Near-identical
MLP neuron pruning: Near-identical
Layer pruning (-3 layers): Minor degradation

Swift Inference Engine

The complete Qwen3 TTS pipeline is implemented natively in Swift using Apple's MLX framework, with no Python dependencies.

Token Map Support

func embedText(_ ids: MLXArray) -> MLXArray {
    if let tokenMap = model.textTokenMap {
        return model.textEmbedding(tokenMap[ids])  // mapped lookup
    }
    return model.textEmbedding(ids)                // direct lookup
}

Generation Length Control

To prevent runaway generation under stochastic sampling (temperature = 0.9), the number of decoding steps is capped relative to the input token count:

T_max = min(T_config, max(75, 6 · |tokens(x)|))
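Expressed as a Swift helper (names hypothetical, mirroring the formula above):

func maxGenerationSteps(configLimit: Int, promptTokenCount: Int) -> Int {
    min(configLimit, max(75, 6 * promptTokenCount))
}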

Quick Start

git clone https://github.com/AtomGradient/swift-qwen3-tts.git
cd swift-qwen3-tts

swift run Qwen3TTSDemo \
  --model path/to/Qwen3-TTS-0.6B-CustomVoice-4bit-pruned-vocab-lite \
  --speaker Aiden \
  --text "Hello, this is on-device TTS!" \
  --output output.wav

Pre-built Models

We release two edge-optimized model variants, ready for on-device deployment:

bf16-pruned-vocab-lite (1.5 GB): vocabulary pruning + ST lite. Lossless.

4bit-pruned-vocab-lite (808 MB): adds 4-bit quantization. Near-identical quality.

Both models support 9 speakers (Aiden, Serena, Vivian, Ryan, Uncle Fu, Ono Anna, Sohee, Eric, Dylan) across 12 languages with emotion control.

Citation

@article{atomgradient2026efficient,
  title={Efficient On-Device Text-to-Speech: A Post-Training
         Compression Pipeline for Qwen3 TTS on Apple Silicon},
  author={AtomGradient},
  year={2026},
  url={https://github.com/AtomGradient/swift-qwen3-tts}
}