NeuTTS Air ☁️
Q8 GGUF version, Q4 GGUF version
Created by Neuphonic - building faster, smaller, on-device voice AI
State-of-the-art Voice AI has been locked behind web APIs for too long. NeuTTS Air is the world’s first super-realistic, on-device, TTS speech language model with instant voice cloning. Built off a 0.5B LLM backbone, NeuTTS Air brings natural-sounding speech, real-time performance, built-in security and speaker cloning to your local device - unlocking a new category of embedded voice agents, assistants, toys, and compliance-safe apps.
Key Features
- 🗣 Best-in-class realism for its size - produces natural, ultra-realistic voices that sound human
- 📱 Optimised for on-device deployment - provided in GGML format, ready to run on phones, laptops, or even Raspberry Pis
- 👫 Instant voice cloning - create your own speaker with as little as 3 seconds of audio
- 🚄 Simple LM + codec architecture built off a 0.5B backbone - the sweet spot between speed, size, and quality for real-world applications
Websites like neutts.com are popping up, and they are not affiliated with Neuphonic, our GitHub, or this repo.
We are on neuphonic.com only. Please be careful out there! 🙏
Model Details
NeuTTS Air is built off Qwen 0.5B - a lightweight yet capable language model optimised for text understanding and generation - as well as a powerful combination of technologies designed for efficiency and quality:
- Audio Codec: NeuCodec - our proprietary neural audio codec that achieves exceptional audio quality at low bitrates using a single codebook
- Format: Available in GGML format for efficient on-device inference
- Responsibility: Watermarked outputs
- Inference Speed: Real-time generation on mid-range devices
- Power Consumption: Optimised for mobile and embedded devices
Get Started with NeuTTS
1. Install System Dependencies (required): espeak-ng

With brew on macOS Ventura and later, apt on Ubuntu 25 or Debian 13, and choco/winget on Windows, install the latest version of espeak-ng with the commands below. If you have a different or older operating system, you may need to install from source: https://github.com/espeak-ng/espeak-ng/blob/master/docs/building.md

Please refer to the following link for instructions on how to install espeak-ng:
https://github.com/espeak-ng/espeak-ng/blob/master/docs/guide.md
# macOS
brew install espeak-ng
# Ubuntu/Debian
sudo apt install espeak-ng
# Windows install
# via chocolatey (https://community.chocolatey.org/packages?page=1&prerelease=False&moderatorQueue=False&tags=espeak)
choco install espeak-ng
# via winget
winget install -e --id eSpeak-NG.eSpeak-NG
# via MSI (you will need to add it to PATH, or follow the "Windows users who installed via msi" instructions below)
# find the msi at https://github.com/espeak-ng/espeak-ng/releases
Windows users who installed via the MSI, or who otherwise do not have espeak-ng on their PATH, need to run the following (see https://github.com/bootphon/phonemizer/issues/163):
$env:PHONEMIZER_ESPEAK_LIBRARY = "c:\Program Files\eSpeak NG\libespeak-ng.dll"
$env:PHONEMIZER_ESPEAK_PATH = "c:\Program Files\eSpeak NG"
setx PHONEMIZER_ESPEAK_LIBRARY "c:\Program Files\eSpeak NG\libespeak-ng.dll"
setx PHONEMIZER_ESPEAK_PATH "c:\Program Files\eSpeak NG"
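Before moving on, you can confirm that espeak-ng is discoverable. Below is a minimal sketch using the phonemizer package (the library configured by the PHONEMIZER_ESPEAK_* variables above); it is only a sanity check and not part of the neutts API:

```python
# Sanity check: can the phonemizer's espeak backend be found?
# Assumes the `phonemizer` package is installed in your environment.
from phonemizer import phonemize

# If espeak-ng is correctly installed (and, on Windows, the variables above
# are set), this prints a phoneme string such as "həloʊ wɜːld".
print(phonemize("hello world", language="en-us", backend="espeak"))
```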
2. Install NeuTTS

pip install neutts

Or, for a local editable install, clone the neutts repository and run in its base folder:

pip install -e .

Alternatively, to install all dependencies, including onnxruntime and llama-cpp-python (equivalent to steps 3 and 4 below):

pip install neutts[all]

or, for an editable install:

pip install -e .[all]

3. (Optional) Install llama-cpp-python to use .gguf models:

pip install "neutts[llama]"

Note that this installs llama-cpp-python without GPU support. To install with GPU support (e.g. CUDA, MPS), please refer to: https://pypi.org/project/llama-cpp-python/

4. (Optional) Install onnxruntime to use the .onnx decoder:

pip install "neutts[onnx]"
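To confirm which of the optional extras ended up in your environment, here is a short illustrative check (llama_cpp and onnxruntime are the import names of the packages pulled in by the extras above):

```python
# Illustrative check of the core package and the optional extras.
# `llama_cpp` comes from neutts[llama]; `onnxruntime` from neutts[onnx].
import importlib

for module in ("neutts", "llama_cpp", "onnxruntime"):
    try:
        importlib.import_module(module)
        print(f"{module}: available")
    except ImportError:
        print(f"{module}: not installed (only needed for the matching extra)")
```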
Basic Example
Run the basic example script to synthesize speech:
python -m examples.basic_example \
--input_text "My name is Dave, and um, I'm from London" \
--ref_audio samples/dave.wav \
--ref_text samples/dave.txt
To specify a particular model repo for the backbone or codec, add the --backbone argument. Available backbones are listed in the NeuTTS Air Hugging Face collection.
Several examples are available, including a Jupyter notebook in the examples folder.
Simple One-Code Block Usage
from neutts import NeuTTS
import soundfile as sf

# Load the GGUF backbone and the NeuCodec audio codec on CPU
tts = NeuTTS(backbone_repo="neuphonic/neutts-air-q4-gguf", backbone_device="cpu", codec_repo="neuphonic/neucodec", codec_device="cpu")

input_text = "My name is Dave, and um, I'm from London."

# The reference audio and its transcript define the voice to clone
ref_audio_path = "samples/dave.wav"
with open("samples/dave.txt", "r") as f:
    ref_text = f.read().strip()

# Encode the reference once, then synthesise the input text in that voice
ref_codes = tts.encode_reference(ref_audio_path)
wav = tts.infer(input_text, ref_codes, ref_text)

sf.write("test.wav", wav, 24000)  # save the generated audio at 24 kHz
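Because the reference only needs to be encoded once, ref_codes and ref_text from the snippet above can be reused to synthesise several utterances in the same cloned voice. A minimal sketch, assuming the objects defined above:

```python
# Reuse the encoded reference from the example above for multiple utterances.
lines = [
    "Welcome to the demo.",
    "This sentence is generated with the same cloned voice.",
]
for i, line in enumerate(lines):
    wav = tts.infer(line, ref_codes, ref_text)
    sf.write(f"line_{i}.wav", wav, 24000)
```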
Tips
NeuTTS Air requires two inputs:
- A reference audio sample (a .wav file)
- A text string
The model then synthesises the text as speech in the style of the reference audio. This is what enables NeuTTS Air’s instant voice cloning capability.
Example Reference Files
You can find some ready-to-use samples in the examples folder:
- samples/dave.wav
- samples/jo.wav
Guidelines for Best Results
For optimal performance, reference audio samples should be:
- Mono channel
- 16-44 kHz sample rate
- 3–15 seconds in length
- Saved as a .wav file
- Clean, with minimal to no background noise
- Natural, continuous speech — like a monologue or conversation, with few pauses, so the model can capture tone effectively
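Most of these requirements can be checked programmatically before you try to clone a voice. The following is a minimal sketch using soundfile (already used in the example above); the thresholds simply mirror the guidelines, and the helper is illustrative rather than part of the neutts API:

```python
# Illustrative validation of a reference clip against the guidelines above.
import soundfile as sf

def check_reference(path):
    info = sf.info(path)
    duration = info.frames / info.samplerate
    problems = []
    if not path.lower().endswith(".wav"):
        problems.append("expected a .wav file")
    if info.channels != 1:
        problems.append(f"expected mono, got {info.channels} channels")
    if not 16_000 <= info.samplerate <= 44_000:
        problems.append(f"sample rate {info.samplerate} Hz is outside 16-44 kHz")
    if not 3.0 <= duration <= 15.0:
        problems.append(f"duration {duration:.1f} s is outside 3-15 seconds")
    return problems

print(check_reference("samples/dave.wav") or "reference looks good")
```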
Responsibility
Every audio file generated by NeuTTS Air is watermarked with the **Perth (Perceptual Threshold) Watermarker**.
Disclaimer
Don't use this model to do bad things… please.