Use custom models
By default, Transformers.js uses hosted pretrained models and precompiled WASM binaries, which should work out-of-the-box. You can customize this as follows:
Settings
import { env } from '@huggingface/transformers';
// Specify a custom location for models (defaults to '/models/').
env.localModelPath = '/path/to/models/';
// Disable the loading of remote models from the Hugging Face Hub:
env.allowRemoteModels = false;
// Set the location of .wasm files (by default, these are loaded from a CDN).
env.backends.onnx.wasm.wasmPaths = '/path/to/files/';

For a full list of available settings, check out the API Reference.
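For example, the following sketch loads a pipeline entirely from disk once remote models are disabled. The task and the 'my-classifier' directory name are placeholder assumptions; the directory must contain a model exported in the layout Transformers.js expects.

import { env, pipeline } from '@huggingface/transformers';

// Resolve models from a local folder and never contact the Hub.
env.localModelPath = '/models/';
env.allowRemoteModels = false;

// The model id is resolved relative to env.localModelPath,
// i.e. this loads from '/models/my-classifier/'.
const classifier = await pipeline('text-classification', 'my-classifier');
console.log(await classifier('I love transformers!'));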
Convert your models to ONNX
We recommend using Optimum to convert your PyTorch models to ONNX in a single command. For the full list of supported architectures, check out the Optimum documentation.
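As a rough sketch, an export with Optimum's CLI looks like the following (the model id and output directory are placeholders; note that Transformers.js expects the exported .onnx weights in an onnx/ subfolder of the model directory):

pip install "optimum[exporters]"
optimum-cli export onnx --model bert-base-uncased bert_onnx/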