<!--Copyright 2023 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agr...
[[open-in-colab]]
Before you can train a model on a dataset, it needs to be preprocessed into the expected model input format. Whether your data is text, images, or audio, it needs to be converted and assembled into batches of tensors. 🤗 Transformers provides a set of preprocessing classes to help prepare your data ...
<Youtube id="Yffk5aydLzg"/>
The main tool for preprocessing textual data is a [tokenizer](main_classes/tokenizer). A tokenizer splits text into *tokens* according to a set of rules. The tokens are converted into numbers and then tensors, which become the model inputs. Any additional inputs required by the model are a...
Sentences aren't always the same length, which can be an issue because tensors, the model inputs, need to have a uniform shape. Padding is a strategy for ensuring tensors are rectangular by adding a special *padding token* to shorter sentences.
Set the `padding` parameter to `True` to pad the shorter sequences in the ...
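The effect of padding can be sketched in plain Python (an illustrative toy, not the tokenizer's actual implementation; the `pad_sequences` helper and `pad_id=0` value are placeholders, since the real pad token id depends on the tokenizer):

```python
# Toy sketch of right-padding: pad token-id sequences so every row in
# the batch has the same length as the longest sequence.
def pad_sequences(batch, pad_id=0):
    max_len = max(len(seq) for seq in batch)
    return {
        "input_ids": [seq + [pad_id] * (max_len - len(seq)) for seq in batch],
        # 1 marks a real token, 0 marks padding the model should ignore
        "attention_mask": [[1] * len(seq) + [0] * (max_len - len(seq)) for seq in batch],
    }

padded = pad_sequences([[101, 7592, 102], [101, 102]])
```

The shorter sequence gets a trailing pad id, and the attention mask records which positions are real tokens.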
On the other end of the spectrum, sometimes a sequence may be too long for a model to handle. In this case, you'll need to truncate the sequence to a shorter length.
Set the `truncation` parameter to `True` to truncate a sequence to the maximum length accepted by the model:
```py
>>> batch_sentences = [
... "Bu...
```
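Truncation itself is simple to picture as a toy sketch (illustrative only, not the tokenizer's implementation; `truncate_sequences` is a hypothetical helper):

```python
# Toy sketch of truncation: keep at most `max_length` token ids per
# sequence, dropping the overflow from the end.
def truncate_sequences(batch, max_length):
    return [seq[:max_length] for seq in batch]

truncated = truncate_sequences([[1, 2, 3, 4, 5], [1, 2]], max_length=3)
```

Sequences already shorter than `max_length` pass through unchanged.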
Finally, you want the tokenizer to return the actual tensors that get fed to the model.
Set the `return_tensors` parameter to either `pt` for PyTorch or `tf` for TensorFlow:
<frameworkcontent>
<pt>
```py
>>> batch_sentences = [
... "But what about second breakfast?",
... "Don't think he knows about secon...
```
</pt>
</frameworkcontent>
For audio tasks, you'll need a [feature extractor](main_classes/feature_extractor) to prepare your dataset for the model. The feature extractor is designed to extract features from raw audio data, and convert them into tensors.
Load the [MInDS-14](https://huggingface.co/datasets/PolyAI/minds14) dataset (see the 🤗 [D...
For computer vision tasks, you'll need an [image processor](main_classes/image_processor) to prepare your dataset for the model.
Image preprocessing consists of several steps that convert images into the input expected by the model. These steps
include but are not limited to resizing, normalizing, color channel correct...
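Two of those steps, rescaling and normalization, can be sketched for a single pixel value (an illustrative toy; the `mean=0.5, std=0.5` values below are placeholders, not any particular checkpoint's statistics):

```python
# Toy sketch of two standard image-preprocessing steps: rescale an 8-bit
# pixel value to [0, 1], then normalize with a per-channel mean and
# standard deviation.
def preprocess_pixel(value, mean=0.5, std=0.5):
    rescaled = value / 255.0
    return (rescaled - mean) / std

lo, hi = preprocess_pixel(0), preprocess_pixel(255)
```

With these placeholder statistics, pixel intensities map into the range [-1, 1].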
In some cases, for instance, when fine-tuning [DETR](./model_doc/detr), the model applies scale augmentation at training
time. This may cause images to be different sizes in a batch. You can use [`DetrImageProcessor.pad`]
from [`DetrImageProcessor`] and define a custom `collate_fn` to batch images together.
```py
>>>...
```
For tasks involving multimodal inputs, you'll need a [processor](main_classes/processors) to prepare your dataset for the model. A processor couples together two processing objects, such as a tokenizer and a feature extractor.
Load the [LJ Speech](https://huggingface.co/datasets/lj_speech) dataset (see the 🤗 [Datasets tu...
The documentation has been moved to [hf.co/docs/sagemaker](https://huggingface.co/docs/sagemaker). This page will be removed in `transformers` 5.0.
- [Train Hugging Face models on Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/train)
- [Deploy Hugging Face models to Amazon SageMaker with the SageMaker Python SDK](https://huggingface.co/docs/sagemaker/inference)
The last two tutorials showed how you can fine-tune a model with PyTorch, Keras, and 🤗 Accelerate for distributed setups. The next step is to share your model with the community! At Hugging Face, we believe in openly sharing knowledge and resources to democratize artificial intelligence for everyone. We encourage you ...
Each repository on the Model Hub behaves like a typical GitHub repository. Our repositories offer versioning, commit history, and the ability to visualize differences.
The Model Hub's built-in versioning is based on git and [git-lfs](https://git-lfs.github.com/). In other words, you can treat one model as one reposit...
Before sharing a model to the Hub, you will need your Hugging Face credentials. If you have access to a terminal, run the following command in the virtual environment where 🤗 Transformers is installed. This will store your access token in your Hugging Face cache folder (`~/.cache/` by default):
```bash
huggingface-c...
```
To ensure your model can be used by someone working with a different framework, we recommend you convert and upload your model with both PyTorch and TensorFlow checkpoints. While users are still able to load your model from a different framework if you skip this step, it will be slower because 🤗 Transformers will need...
<frameworkcontent>
<pt>
<Youtube id="Z1-XMy-GNLQ"/>
Sharing a model to the Hub is as simple as adding an extra parameter or callback. Remember from the [fine-tuning tutorial](training), the [`TrainingArguments`] class is where you specify hyperparameters and additional training options. One of these training options ...
</pt>
</frameworkcontent>
You can also call `push_to_hub` directly on your model to upload it to the Hub.
Specify your model name in `push_to_hub`:
```py
>>> pt_model.push_to_hub("my-awesome-model")
```
This creates a repository under your username with the model name `my-awesome-model`. Users can now load your model with the `from_pretra...
Users who prefer a no-code approach are able to upload a model through the Hub's web interface. Visit [huggingface.co/new](https://huggingface.co/new) to create a new repository:

From here, add some i...
To make sure users understand your model's capabilities, limitations, potential biases, and ethical considerations, please add a model card to your repository. The model card is defined in the `README.md` file. You can add a model card by:
* Manually creating and uploading a `README.md` file.
* Clicking on the **Edit ...
GPUs are the standard choice of hardware for machine learning because, unlike CPUs, they are optimized for memory bandwidth and parallelism. To keep up with the larger sizes of modern models or to run these large models on existing and older hardware, there are several optimizations you can use to speed up GPU inferenc...
<Tip>
FlashAttention-2 is experimental and may change considerably in future versions.
</Tip>
[FlashAttention-2](https://huggingface.co/papers/2205.14135) is a faster and more efficient implementation of the standard attention mechanism that can significantly speed up inference by:
1. additionally parallelizing ...
You can benefit from considerable speedups for inference, especially for inputs with long sequences. However, since FlashAttention-2 does not support computing attention scores with padding tokens, you must manually pad/unpad the attention scores for batched inference when the sequence contains padding tokens. This lea...
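The unpadding idea can be sketched in plain Python (an illustrative toy of the packed, variable-length layout that FlashAttention-style kernels consume; `unpad` is a hypothetical helper, not the library's actual API):

```python
# Toy sketch: pack only the real (non-padding) tokens of a padded batch
# into one flat sequence, and record cumulative sequence lengths so each
# original sequence's boundaries can still be recovered.
def unpad(input_ids, attention_mask):
    flat, cu_seqlens = [], [0]
    for ids, mask in zip(input_ids, attention_mask):
        real = [tok for tok, m in zip(ids, mask) if m == 1]
        flat.extend(real)
        cu_seqlens.append(cu_seqlens[-1] + len(real))
    return flat, cu_seqlens

flat, cu_seqlens = unpad([[1, 2, 3], [4, 5, 0]], [[1, 1, 1], [1, 1, 0]])
```

No compute is wasted on the padding position, which is where the speedup on padded batches comes from.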
PyTorch's [`torch.nn.functional.scaled_dot_product_attention`](https://pytorch.org/docs/master/generated/torch.nn.functional.scaled_dot_product_attention.html) (SDPA) can also call FlashAttention and memory-efficient attention kernels under the hood. SDPA support is currently being added natively in Transformers and is...
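As a reference for what SDPA computes, here is a toy pure-Python version of the attention math (an illustrative sketch of softmax(QKᵀ/√d)V over 2-D lists, not the fused PyTorch kernel; `sdpa_reference` is a hypothetical name):

```python
import math

# Toy reference for the computation SDPA fuses into one kernel:
# softmax(Q K^T / sqrt(d)) V, for inputs shaped (seq_len, d) as lists.
def sdpa_reference(q, k, v):
    d = len(q[0])
    out = []
    for qrow in q:
        scores = [sum(qi * ki for qi, ki in zip(qrow, krow)) / math.sqrt(d)
                  for krow in k]
        m = max(scores)  # subtract the max for numerical stability
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        out.append([sum(w * vrow[j] for w, vrow in zip(weights, v))
                    for j in range(len(v[0]))])
    return out

attn_out = sdpa_reference(q=[[1.0, 0.0]],
                          k=[[1.0, 0.0], [0.0, 1.0]],
                          v=[[1.0, 0.0], [0.0, 1.0]])
```

The fused kernels produce the same result while avoiding materializing the full score matrix.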
<Tip warning={true}>
Some BetterTransformer features are being upstreamed to Transformers with default support for native `torch.nn.scaled_dot_product_attention`. BetterTransformer still has a wider coverage than the Transformers SDPA integration, but you can expect more and more architectures to natively support SDP...
</Tip>
bitsandbytes is a quantization library that includes support for 4-bit and 8-bit quantization. Quantization reduces your model size compared to its native full precision version, making it easier to fit large models onto GPUs with limited memory.
Make sure you have bitsandbytes and 🤗 Accelerate installed:
```bash
...
```
To load a model in 4-bit for inference, use the `load_in_4bit` parameter. The `device_map` parameter is optional, but we recommend setting it to `"auto"` to allow 🤗 Accelerate to automatically and efficiently allocate the model given the available resources in the environment.
```py
from transformers import AutoMode...
```
<Tip>
If you're curious and interested in learning more about the concepts underlying 8-bit quantization, read the [Gentle Introduction to 8-bit Matrix Multiplication for transformers at scale using Hugging Face Transformers, Accelerate and bitsandbytes](https://huggingface.co/blog/hf-bitsandbytes-integration) blog p...
</Tip>
<Tip>
Learn more details about using ORT with 🤗 Optimum in the [Accelerated inference on NVIDIA GPUs](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/gpu#accelerated-inference-on-nvidia-gpus) and [Accelerated inference on AMD GPUs](https://huggingface.co/docs/optimum/onnxruntime/usage_guides/amdgpu#acce...
</Tip>
It is often possible to combine several of the optimization techniques described above to get the best inference performance possible for your model. For example, you can load a model in 4-bit, and then enable BetterTransformer with FlashAttention:
```py
import torch
from torch.nn.attention import SDPBackend, sdpa_ke...
```
When training on a single CPU is too slow, we can use multiple CPUs. This guide focuses on PyTorch-based Distributed Data Parallel (DDP), which enables efficient distributed CPU training on [bare metal](#usage-in-trainer) and [Kubernetes](#usage-with-kubernetes).
[Intel® oneCCL](https://github.com/oneapi-src/oneCCL) (collective communications library) is a library for efficient distributed deep learning training that implements collective operations such as allreduce, allgather, and alltoall. For more information on oneCCL, please refer to the [oneCCL documentation](https://spec.oneapi.com/ve...
Wheel files are available for the following Python versions:
| Extension Version | Python 3.7 | Python 3.8 | Python 3.9 | Python 3.10 | Python 3.11 |
| :---------------: | :--------: | :--------: | :--------: | :---------: | :---------: |
| 2.5.0 | | √ | √ | √ | √ ...
Use this standards-based MPI implementation to deliver flexible, efficient, scalable cluster messaging on Intel® architecture. This component is part of the Intel® oneAPI HPC Toolkit.
oneccl_bindings_for_pytorch is installed along with the MPI tool set. You need to source the environment before using it.
```bash
oneccl...
```
Intel Extension for PyTorch (IPEX) provides performance optimizations for CPU training with both Float32 and BFloat16 (refer to the [single CPU section](./perf_train_cpu) to learn more).
The following "Usage in Trainer" section takes mpirun from the Intel® MPI Library as an example.
To enable multi-CPU distributed training in the Trainer with the ccl backend, users should add **`--ddp_backend ccl`** in the command arguments.
Let's see an example with the [question-answering example](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering).
The following command...
The same distributed training job from the previous section can be deployed to a Kubernetes cluster using the
[Kubeflow PyTorchJob training operator](https://www.kubeflow.org/docs/components/training/user-guides/pytorch).
This example assumes that you have:
* Access to a Kubernetes cluster with [Kubeflow installed](https://www.kubeflow.org/docs/started/installing-kubeflow)
* [`kubectl`](https://kubernetes.io/docs/tasks/tools) installed and configured to access the Kubernetes cluster
* A [Persistent Volume Claim (PVC)](https://kubernetes...
The [Kubeflow PyTorchJob](https://www.kubeflow.org/docs/components/training/user-guides/pytorch) is used to run the distributed
training job on the cluster. The YAML file for the PyTorchJob defines parameters such as:
* The name of the PyTorchJob
* The number of replicas (workers)
* The Python script and its parameter...
After the PyTorchJob spec has been updated with values appropriate for your cluster and training job, it can be deployed
to the cluster using:
```bash
export NAMESPACE=<specify your namespace>
kubectl create -f pytorchjob.yaml -n ${NAMESPACE}
```
The `kubectl get pods -n ${NAMESPACE}` command can then be used to lis...
This guide covered running distributed PyTorch training jobs using multiple CPUs on bare metal and on a Kubernetes
cluster. Both cases utilize Intel Extension for PyTorch and Intel oneCCL Bindings for PyTorch for optimal training
performance, and can be used as a template to run your own workload on multiple nodes.
Batched inputs are often different lengths, so they can't be converted to fixed-size tensors. Padding and truncation are strategies for dealing with this problem, to create rectangular tensors from batches of varying lengths. Padding adds a special **padding token** to ensure shorter sequences will have the same length...
Efficient caching is crucial for optimizing the performance of models in various generative tasks,
including text generation, translation, summarization and other transformer-based applications.
Effective caching helps reduce computation time and improve response rates, especially in real-time or resource-intensive app...
Imagine you’re having a conversation with someone, and instead of remembering what was said previously, you have to start from scratch every time you respond. This would be slow and inefficient, right? In the world of Transformer models, a similar concept applies, and that's where caching keys and values comes into play...
When utilizing a cache object in the input, the Attention module performs several critical steps to integrate past and present information seamlessly.
The Attention module concatenates the current key-values with the past key-values stored in the cache. This results in attention weights of shape `(new_tokens_length, ...
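The concatenation pattern can be sketched with a toy class (illustrative only, not the real [`~DynamicCache`] API; `ToyDynamicCache` is a hypothetical name):

```python
# Toy sketch of the cache update pattern: at each step the new keys and
# values are concatenated onto whatever the cache already holds, so
# attention can span past + present tokens while only the new tokens'
# keys and values need to be computed.
class ToyDynamicCache:
    def __init__(self):
        self.keys, self.values = [], []

    def update(self, new_keys, new_values):
        self.keys.extend(new_keys)
        self.values.extend(new_values)
        return self.keys, self.values

cache = ToyDynamicCache()
cache.update(["k0", "k1", "k2"], ["v0", "v1", "v2"])  # prefill: 3 prompt tokens
keys, values = cache.update(["k3"], ["v3"])           # decode: 1 new token
```

After the decode step, attention for the single new token sees all four cached positions.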
In 🤗 Transformers, we support various Cache types to optimize the performance across different models and tasks. By default, all models generate with caching,
with the [`~DynamicCache`] class being the default cache for most models. It allows us to dynamically grow the cache size by saving more and more keys and values a...
The key and value cache can occupy a large portion of memory, becoming a [bottleneck for long-context generation](https://huggingface.co/blog/llama31#inference-memory-requirements), especially for Large Language Models.
Quantizing the cache when using `generate()` can significantly reduce memory requirements at the cos...
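The memory/accuracy trade-off rests on standard quantization ideas, which can be sketched with toy absmax int8 quantization in plain Python (an illustration of the general idea, not the backends the quantized cache actually uses; the helper names are hypothetical):

```python
# Toy absmax int8 quantization: store one floating-point scale per
# tensor plus small integer codes instead of full-precision values,
# trading a little accuracy for a large memory saving.
def quantize_int8(values):
    scale = max(abs(v) for v in values) / 127 or 1.0  # avoid a zero scale
    codes = [round(v / scale) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    return [c * scale for c in codes]

codes, scale = quantize_int8([0.5, -1.0, 0.25])
restored = dequantize_int8(codes, scale)
```

Each stored value shrinks from 16 or 32 bits to 8 (or fewer, for 4-bit backends), at the cost of small round-trip errors.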
Similarly to KV cache quantization, [`~OffloadedCache`] strategy aims to reduce GPU VRAM usage.
It does so by moving the KV cache for most layers to the CPU.
As the model's `forward()` method iterates over the layers, this strategy maintains the current layer cache on the GPU.
At the same time, it asynchronously prefetc...
Since the `DynamicCache` dynamically grows with each generation step, it prevents you from taking advantage of JIT optimizations. The [`~StaticCache`] pre-allocates
a specific maximum size for the keys and values, allowing you to generate up to the maximum length without having to modify cache size. Check the below usa...
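The pre-allocation idea can be sketched with a toy class (illustrative only, not the real [`~StaticCache`] API; `ToyStaticCache` is a hypothetical name):

```python
# Toy static cache: storage for `max_length` positions is allocated up
# front, and generation writes into fixed slots instead of growing the
# buffer. The constant shape is what makes it JIT/compile friendly.
class ToyStaticCache:
    def __init__(self, max_length):
        self.keys = [None] * max_length
        self.seen_tokens = 0

    def update(self, new_keys):
        for key in new_keys:
            self.keys[self.seen_tokens] = key
            self.seen_tokens += 1
        return self.keys[: self.seen_tokens]

cache = ToyStaticCache(max_length=8)
cache.update(["k0", "k1"])
visible = cache.update(["k2"])
```

Note that the backing storage never changes size; only the number of filled slots grows.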
Just as [`~OffloadedCache`] exists to offload a `DynamicCache`, there is also an offloaded static cache. It fully supports
JIT optimizations. Just pass `cache_implementation="offloaded_static"` in the `generation_config` or directly to the `generate()` call.
This will use the [`~OffloadedStaticCache`] implementation i...
As the name suggests, this cache type implements a sliding window over previous keys and values, retaining only the last `sliding_window` tokens. It should be used with models like Mistral that support sliding window attention. Additionally, similar to Static Cache, this one is JIT-friendly and can be used with the sam...
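The retention rule can be sketched with a bounded deque (a toy illustration, not the actual sliding-window cache implementation):

```python
from collections import deque

# Toy sliding-window cache: a bounded deque keeps only the most recent
# `sliding_window` entries, silently evicting the oldest one each time
# a new entry arrives past the limit.
sliding_window = 4
cache = deque(maxlen=sliding_window)
for token in range(6):
    cache.append(token)
```

After six tokens, only the last four remain in the window.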
Sink Cache was introduced in ["Efficient Streaming Language Models with Attention Sinks"](https://arxiv.org/abs/2309.17453). It allows you to generate long sequences of text ("infinite length" according to the paper) without any fine-tuning. That is achieved by smart handling of previous keys and values, specifically i...
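The eviction rule behind the paper's idea can be sketched in plain Python (a toy illustration, not the actual Sink Cache implementation; `evict` is a hypothetical helper):

```python
# Toy sketch of sink-cache eviction: keep the first `num_sink_tokens`
# "attention sink" positions plus the most recent `window` positions,
# dropping everything in between once the cache outgrows both.
def evict(positions, num_sink_tokens, window):
    if len(positions) <= num_sink_tokens + window:
        return list(positions)
    return positions[:num_sink_tokens] + positions[-window:]

kept = evict(list(range(10)), num_sink_tokens=2, window=4)
```

The early "sink" positions are preserved because, per the paper, models attend heavily to them even when they carry little content.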
The [`~EncoderDecoderCache`] is a wrapper designed to handle the caching needs of encoder-decoder models. This cache type is specifically built to manage both self-attention and cross-attention caches, ensuring storage and retrieval of past key/values required for these complex models. Cool thing about Encoder-Decoder ...
Some models require storing previous keys, values, or states in a specific way, and the above cache classes cannot be used. For such cases, we have several specialized cache classes that are designed for specific models. These models only accept their own dedicated cache classes and do not support using any other cache...
We have seen how to use each of the cache types when generating. What if you want to use the cache in an iterative generation setting, for example in applications like chatbots, where interactions involve multiple turns and continuous back-and-forth exchanges? Iterative generation with cache allows these systems to handle ong...
Sometimes you would want to first fill-in cache object with key/values for certain prefix prompt and re-use it several times to generate different sequences from it. In that case you can construct a `Cache` object that will hold the instruction prompt, and re-use it several times with different text sequences.
```pyt... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#re-use-cache-to-continue-generation | #re-use-cache-to-continue-generation | .md | 6_14 |
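The prefix-reuse pattern can be illustrated without a model at all. This toy sketch uses a plain list in place of a cache object; with transformers you would deep-copy a prefilled `DynamicCache` in the same way before each `generate` call:

```py
# Toy illustration of prompt-prefix reuse: compute the "cache" for a shared
# instruction prompt once, then copy it for each continuation so the
# original stays untouched.
import copy

def prefill(prompt_tokens):
    # Stand-in for a forward pass that fills the cache with the prompt's KVs.
    return [f"kv({t})" for t in prompt_tokens]

prompt_cache = prefill(["you", "are", "helpful"])

responses = []
for question in ["q1", "q2"]:
    cache = copy.deepcopy(prompt_cache)  # reuse the prefix without mutating it
    cache.append(f"kv({question})")      # only the new tokens get processed
    responses.append(len(cache))

print(responses, len(prompt_cache))  # [4, 4] 3 -- the prefix cache is intact
```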
Prior to the introduction of the `Cache` object, the cache of LLMs used to be a tuple of tuples of tensors. The legacy
format has a dynamic size, growing as we generate text -- very similar to `DynamicCache`. If your project depends on
this legacy format, you can seamlessly convert it to a `DynamicCache` and back.
```... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/kv_cache.md | https://huggingface.co/docs/transformers/en/kv_cache/#legacy-cache-format | #legacy-cache-format | .md | 6_15 |
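The shape of the round trip can be sketched with plain Python. The class below mirrors the spirit of `DynamicCache.from_legacy_cache` / `to_legacy_cache` but uses lists instead of tensors and is not the real API:

```py
# Sketch of the legacy format (a tuple of per-layer (key, value) tuples) and
# the conversion round trip the docs describe. Toy code only.

class ToyDynamicCache:
    def __init__(self):
        self.key_cache, self.value_cache = [], []

    @classmethod
    def from_legacy_cache(cls, legacy):
        cache = cls()
        for keys, values in legacy:
            cache.key_cache.append(keys)
            cache.value_cache.append(values)
        return cache

    def to_legacy_cache(self):
        return tuple(zip(self.key_cache, self.value_cache))

legacy = (("k0", "v0"), ("k1", "v1"))  # one (key, value) pair per layer
cache = ToyDynamicCache.from_legacy_cache(legacy)
round_trip = cache.to_legacy_cache()
print(round_trip == legacy)  # True -- the conversion loses nothing
```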
As models get bigger, parallelism has emerged as a strategy for training larger models on limited hardware and accelerating training speed by several orders of magnitude. At Hugging Face, we created the [🤗 Accelerate](https://huggingface.co/docs/accelerate) library to help users easily train a 🤗 Transformers model on... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#distributed-training-with--accelerate | #distributed-training-with--accelerate | .md | 7_1 |
Get started by installing 🤗 Accelerate:
```bash
pip install accelerate
```
Then import and create an [`~accelerate.Accelerator`] object. The [`~accelerate.Accelerator`] will automatically detect your type of distributed setup and initialize all the necessary components for training. You don't need to explicitly pl... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#setup | #setup | .md | 7_2 |
The next step is to pass all the relevant training objects to the [`~accelerate.Accelerator.prepare`] method. This includes your training and evaluation DataLoaders, a model and an optimizer:
```py
>>> train_dataloader, eval_dataloader, model, optimizer = accelerator.prepare(
... train_dataloader, eval_dataloader... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#prepare-to-accelerate | #prepare-to-accelerate | .md | 7_3 |
The last addition is to replace the typical `loss.backward()` in your training loop with 🤗 Accelerate's [`~accelerate.Accelerator.backward`] method:
```py
>>> for epoch in range(num_epochs):
... for batch in train_dataloader:
... outputs = model(**batch)
... loss = outputs.loss
... accele... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#backward | #backward | .md | 7_4 |
Once you've added the relevant lines of code, launch your training in a script or a notebook like Colaboratory. | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#train | #train | .md | 7_5 |
If you are running your training from a script, run the following command to create and save a configuration file:
```bash
accelerate config
```
Then launch your training with:
```bash
accelerate launch train.py
``` | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#train-with-a-script | #train-with-a-script | .md | 7_6 |
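For reference, the file written by `accelerate config` might look something like the following for a single machine with two GPUs — the exact keys and values depend on your answers to the interactive prompts, so treat this as an illustrative example only:

```yaml
# Illustrative default_config.yaml for one machine with 2 GPUs (example values)
compute_environment: LOCAL_MACHINE
distributed_type: MULTI_GPU
mixed_precision: fp16
num_machines: 1
num_processes: 2
```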
🤗 Accelerate can also run in a notebook if you're planning on using Colaboratory's TPUs. Wrap all the code responsible for training in a function, and pass it to [`~accelerate.notebook_launcher`]:
```py
>>> from accelerate import notebook_launcher
>>> notebook_launcher(training_function)
```
For more information ... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/accelerate.md | https://huggingface.co/docs/transformers/en/accelerate/#train-with-a-notebook | #train-with-a-notebook | .md | 7_7 |
<!--⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
--> | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/ | .md | 8_0 | |
This page regroups resources around 🤗 Transformers developed by the community. | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community | #community | .md | 8_1 |
| Resource | Description | Author |
|:----------|:-------------|------:|
| [Hugging Face Transformers Glossary Flashcards](https://www.darigovresearch.com/huggingface-transformers-glossary-flashcards) | A set of flashcards based on the [Transformers Docs Glossary](glossary) that has been put int... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-resources | #community-resources | .md | 8_2 |
| Notebook | Description | Author | |
|:----------|:-------------|:-------------|------:|
| [Fine-tune a pre-trained Transformer to generate lyrics](https://github.com/AlekseyKorshuk/huggingartists) | How to generate lyrics in the style of your favorite artist by fine-tuning a GPT-2 model |... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/community.md | https://huggingface.co/docs/transformers/en/community/#community-notebooks | #community-notebooks | .md | 8_3 |
Sometimes errors occur, but we are here to help! This guide covers some of the most common issues we've seen and how you can resolve them. However, this guide isn't meant to be a comprehensive collection of every 🤗 Transformers issue. For more help with troubleshooting your issue, try:
<Youtube id="S2EEG3JIt2A"/>
... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/troubleshooting.md | https://huggingface.co/docs/transformers/en/troubleshooting/#troubleshoot | #troubleshoot | .md | 9_1 |
Some GPU instances on cloud and intranet setups are firewalled to external connections, resulting in a connection error. When your script attempts to download model weights or datasets, the download will hang and then timeout with the following message:
```
ValueError: Connection error, and we cannot find the request... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/troubleshooting.md | https://huggingface.co/docs/transformers/en/troubleshooting/#firewalled-environments | #firewalled-environments | .md | 9_2 |
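If the model files are already in your local cache, you can avoid the connection attempt entirely by enabling offline mode before the library is imported. A minimal sketch:

```py
# Tell 🤗 Transformers not to reach out to the Hub at all and rely on the
# local cache instead. These variables must be set before importing the
# library (or exported in your shell environment).
import os

os.environ["HF_HUB_OFFLINE"] = "1"        # hub-wide offline switch
os.environ["TRANSFORMERS_OFFLINE"] = "1"  # transformers-specific switch

print(os.environ["TRANSFORMERS_OFFLINE"])  # 1
```

With these set, loading a checkpoint that is not in the cache fails fast with a clear error instead of hanging on a blocked connection.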
Training large models with millions of parameters can be challenging without the appropriate hardware. A common error you may encounter when the GPU runs out of memory is:
```
CUDA out of memory. Tried to allocate 256.00 MiB (GPU 0; 11.17 GiB total capacity; 9.70 GiB already allocated; 179.81 MiB free; 9.85 GiB reser... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/troubleshooting.md | https://huggingface.co/docs/transformers/en/troubleshooting/#cuda-out-of-memory | #cuda-out-of-memory | .md | 9_3 |
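One common mitigation is to shrink the per-device batch size and make up the difference with gradient accumulation, so the effective batch size the optimizer sees stays the same. A sketch of the bookkeeping (the function name is illustrative):

```py
# Keep the effective batch size fixed while lowering peak GPU memory: run
# several small forward/backward passes before each optimizer step.

def accumulation_steps(target_batch_size, per_device_batch_size):
    """How many micro-batches to accumulate before each optimizer step."""
    if target_batch_size % per_device_batch_size != 0:
        raise ValueError("target batch size must be a multiple of the device batch size")
    return target_batch_size // per_device_batch_size

# A batch of 64 no longer fits, but 8 does: accumulate over 8 micro-batches.
steps = accumulation_steps(target_batch_size=64, per_device_batch_size=8)
print(steps)  # 8
```

In the [`TrainingArguments`], this corresponds to lowering `per_device_train_batch_size` and raising `gradient_accumulation_steps`.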
TensorFlow's [model.save](https://www.tensorflow.org/tutorials/keras/save_and_load#save_the_entire_model) method will save the entire model - architecture, weights, training configuration - in a single file. However, when you load the model file again, you may run into an error because 🤗 Transformers may not load all ... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/troubleshooting.md | https://huggingface.co/docs/transformers/en/troubleshooting/#unable-to-load-a-saved-tensorflow-model | #unable-to-load-a-saved-tensorflow-model | .md | 9_4 |
Another common error you may encounter, especially if it is a newly released model, is `ImportError`:
```
ImportError: cannot import name 'ImageGPTImageProcessor' from 'transformers' (unknown location)
```
For these error types, check to make sure you have the latest version of 🤗 Transformers installed to access t... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/troubleshooting.md | https://huggingface.co/docs/transformers/en/troubleshooting/#importerror | #importerror | .md | 9_5 |
Sometimes you may run into a generic CUDA error that indicates a problem in the device code.
```
RuntimeError: CUDA error: device-side assert triggered
```
You should try to run the code on a CPU first to get a more descriptive error message. Add the following environment variable to the beginning of your code to switch to a... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/troubleshooting.md | https://huggingface.co/docs/transformers/en/troubleshooting/#cuda-error-device-side-assert-triggered | #cuda-error-device-side-assert-triggered | .md | 9_6 |
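A sketch of the two environment variables involved — set them before importing `torch` or `transformers` for them to take effect:

```py
# CUDA_VISIBLE_DEVICES="" forces the code onto the CPU, where you get a
# readable Python traceback instead of a generic device-side assert.
# CUDA_LAUNCH_BLOCKING=1 makes kernels run synchronously on GPU runs,
# so the traceback points at the operation that actually failed.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = ""
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"
```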
In some cases, the output `hidden_state` may be incorrect if the `input_ids` include padding tokens. To demonstrate, load a model and tokenizer. You can access a model's `pad_token_id` to see its value. The `pad_token_id` may be `None` for some models, but you can always manually set it.
```py
>>> from transformers i... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/troubleshooting.md | https://huggingface.co/docs/transformers/en/troubleshooting/#incorrect-output-when-padding-tokens-arent-masked | #incorrect-output-when-padding-tokens-arent-masked | .md | 9_7 |
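The effect of an unmasked pad token can be shown with a toy mean-pooling example — plain Python standing in for the real attention computation, with invented values:

```py
# Why padding must be masked: pooling token embeddings with and without an
# attention mask gives different results once pad tokens are present.

def mean_pool(embeddings, attention_mask):
    kept = [e for e, m in zip(embeddings, attention_mask) if m == 1]
    return sum(kept) / len(kept)

embeddings = [2.0, 4.0, 0.5, 0.5]  # last two positions are pad tokens
mask = [1, 1, 0, 0]                # attention_mask: 1 = real token, 0 = padding

unmasked = sum(embeddings) / len(embeddings)  # pad tokens leak into the mean
masked = mean_pool(embeddings, mask)

print(unmasked, masked)  # 1.75 3.0 -- the pad tokens drag the result down
```

Passing the `attention_mask` returned by the tokenizer avoids this, which is why it should accompany the `input_ids` whenever padding is used.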
Generally, we recommend using the [`AutoModel`] class to load pretrained instances of models. This class
can automatically infer and load the correct architecture from a given checkpoint based on the configuration. If you see
this `ValueError` when loading a model from a checkpoint, this means the Auto class couldn't f... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/troubleshooting.md | https://huggingface.co/docs/transformers/en/troubleshooting/#valueerror-unrecognized-configuration-class-xyz-for-this-kind-of-automodel | #valueerror-unrecognized-configuration-class-xyz-for-this-kind-of-automodel | .md | 9_8 |
Deploying 🤗 Transformers models in production environments often requires, or can benefit from, exporting the models into
a serialized format that can be loaded and executed on specialized runtimes and hardware.
🤗 Optimum is an extension of Transformers that enables exporting models from PyTorch or TensorFlow to ser... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/serialization.md | https://huggingface.co/docs/transformers/en/serialization/#export-to-onnx | #export-to-onnx | .md | 10_1 |
[ONNX (Open Neural Network eXchange)](http://onnx.ai) is an open standard that defines a common set of operators and a
common file format to represent deep learning models in a wide variety of frameworks, including PyTorch and
TensorFlow. When a model is exported to the ONNX format, these operators are used to
construc... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/serialization.md | https://huggingface.co/docs/transformers/en/serialization/#export-to-onnx | #export-to-onnx | .md | 10_2 |
To export a 🤗 Transformers model to ONNX, first install an extra dependency:
```bash
pip install optimum[exporters]
```
To check out all available arguments, refer to the [🤗 Optimum docs](https://huggingface.co/docs/optimum/exporters/onnx/usage_guides/export_a_model#exporting-a-model-to-onnx-using-the-cli),
or vi... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/serialization.md | https://huggingface.co/docs/transformers/en/serialization/#exporting-a--transformers-model-to-onnx-with-cli | #exporting-a--transformers-model-to-onnx-with-cli | .md | 10_3 |
As an alternative to the CLI, you can export a 🤗 Transformers model to ONNX programmatically like so:
```python
>>> from optimum.onnxruntime import ORTModelForSequenceClassification
>>> from transformers import AutoTokenizer
>>> model_checkpoint = "distilbert-base-uncased-distilled-squad"
>>> save_directory = "onnx/"
>>> # Load a mod... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/serialization.md | https://huggingface.co/docs/transformers/en/serialization/#exporting-a--transformers-model-to-onnx-with-optimumonnxruntime | #exporting-a--transformers-model-to-onnx-with-optimumonnxruntime | .md | 10_4 |
If you wish to contribute by adding support for a model that cannot be currently exported, you should first check if it is
supported in [`optimum.exporters.onnx`](https://huggingface.co/docs/optimum/exporters/onnx/overview),
and if it is not, [contribute to 🤗 Optimum](https://huggingface.co/docs/optimum/exporters/onnx... | /Users/nielsrogge/Documents/python_projecten/transformers/docs/source/en/serialization.md | https://huggingface.co/docs/transformers/en/serialization/#exporting-a-model-for-an-unsupported-architecture | #exporting-a-model-for-an-unsupported-architecture | .md | 10_5 |