| title | author | date | local | tags | URL | content |
|---|---|---|---|---|---|---|
How to train a new language model from scratch using Transformers and Tokenizers | julien-c | February 14, 2020 | how-to-train | guide, nlp | https://huggingface.co/blog/how-to-train | # How to train a new language model from scratch using Transformers and Tokenizers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/01_how_to_train.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"> </a> Over the... |
How to generate text: using different decoding methods for language generation with Transformers | patrickvonplaten | March, 2020 | how-to-generate | guide, nlp | https://huggingface.co/blog/how-to-generate | # How to generate text: using different decoding methods for language generation with Transformers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/02_how_to_generate.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Col... |
The Reformer - Pushing the limits of language modeling | patrickvonplaten | July 3, 2020 | reformer | research, nlp | https://huggingface.co/blog/reformer | # The Reformer - Pushing the limits of language modeling <a href="https://colab.research.google.com/github/patrickvonplaten/blog/blob/main/notebooks/03_reformer.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## How the Reformer uses less than 8... |
Block Sparse Matrices for Smaller and Faster Language Models | madlag | Sep 10, 2020 | pytorch_block_sparse | research, nlp | https://huggingface.co/blog/pytorch_block_sparse | # Block Sparse Matrices for Smaller and Faster Language Models ## Saving space and time, one zero at a time In previous [blog](https://medium.com/huggingface/is-the-future-of-neural-networks-sparse-an-introduction-1-n-d03923ecbd70) [posts](https://medium.com/huggingface/sparse-neural-networks-2-n-gpu-performance-b8... |
Transformer-based Encoder-Decoder Models | patrickvonplaten | October 10, 2020 | encoder-decoder | research, nlp | https://huggingface.co/blog/encoder-decoder | # Transformer-based Encoder-Decoder Models <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Encoder_Decoder_Model.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> # **Transformer-based Encoder-Decoder ... |
Hyperparameter Search with Transformers and Ray Tune | ray-project | November 2, 2020 | ray-tune | open-source-collab, nlp | https://huggingface.co/blog/ray-tune | # Hyperparameter Search with Transformers and Ray Tune ##### A guest blog post by Richard Liaw from the Anyscale team With cutting edge research implementations, thousands of trained models easily accessible, the Hugging Face [transformers](https://github.com/huggingface/transformers) library has become critical to... |
Porting fairseq wmt19 translation system to transformers | stas | November 3, 2020 | porting-fsmt | open-source-collab, nlp | https://huggingface.co/blog/porting-fsmt | # Porting fairseq wmt19 translation system to transformers ##### A guest blog post by Stas Bekman This article is an attempt to document how [fairseq wmt19 translation system](https://github.com/pytorch/fairseq/tree/master/examples/wmt19) was ported to [`transformers`](https://github.com/huggingface/transformers/)... |
Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models | patrickvonplaten | November 09, 2020 | warm-starting-encoder-decoder | guide, nlp | https://huggingface.co/blog/warm-starting-encoder-decoder | # Leveraging Pre-trained Language Model Checkpoints for Encoder-Decoder Models <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Leveraging_Pre_trained_Checkpoints_for_Encoder_Decoder_Models.ipynb"> <img src="https://colab.research.google.com/assets/colab-ba... |
How we sped up transformer inference 100x for 🤗 API customers | Narsil | January 18, 2021 | accelerated-inference | analysis, nlp | https://huggingface.co/blog/accelerated-inference | # How we sped up transformer inference 100x for 🤗 API customers 🤗 Transformers has become the default library for data scientists all around the world to explore state of the art NLP models and build new NLP features. With over 5,000 pre-trained and fine-tuned models available, in over 250 languages, it is a rich ... |
Fit More and Train Faster With ZeRO via DeepSpeed and FairScale | stas | January 19, 2021 | zero-deepspeed-fairscale | guide | https://huggingface.co/blog/zero-deepspeed-fairscale | # Fit More and Train Faster With ZeRO via DeepSpeed and FairScale ##### A guest blog post by Hugging Face fellow Stas Bekman As recent Machine Learning models have been growing much faster than the amount of GPU memory added to newly released cards, many users are unable to train or even just load some of those h... |
Faster TensorFlow models in Hugging Face Transformers | jplu | January 26, 2021 | tf-serving | guide, nlp | https://huggingface.co/blog/tf-serving | # Faster TensorFlow models in Hugging Face Transformers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/10_tf_serving.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"> </a> In the last few months, the Hugging F... |
Hugging Face on PyTorch / XLA TPUs | jysohn23 | February 9, 2021 | pytorch-xla | open-source-collab | https://huggingface.co/blog/pytorch-xla | # Hugging Face on PyTorch / XLA TPUs: Faster and cheaper training <a href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/13_pytorch_xla.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a> ## Training Your Favorite Tran... |
Retrieval Augmented Generation with Huggingface Transformers and Ray | amogkam | February 10, 2021 | ray-rag | open-source-collab, nlp | https://huggingface.co/blog/ray-rag | # Retrieval Augmented Generation with Huggingface Transformers and Ray ##### A guest blog post by <a href="/amogkam">Amog Kamsetty</a> from the Anyscale team [Huggingface Transformers](https://huggingface.co/) recently added the [Retrieval Augmented Generation (RAG)](https://twitter.com/huggingface/status/131059756... |
Simple considerations for simple people building fancy neural networks | VictorSanh | February 25, 2021 | simple-considerations | guide | https://huggingface.co/blog/simple-considerations |  <span class="text-gray-500 text-xs">Photo by [Henry & Co.](https://unsplash.com/@hngstrm?utm_source=unsplash&utm_medium=referral&utm_content=creditCopyText) on [Unsplash](https://unsplash.com/s/photos/builder?utm_source=unsplash&utm_... |
Hugging Face Reads, Feb. 2021 - Long-range Transformers | VictorSanh | March 09, 2021 | long-range-transformers | research, nlp | https://huggingface.co/blog/long-range-transformers | <figure> <img src="/blog/assets/14_long_range_transformers/EfficientTransformerTaxonomy.png" alt="Efficient Transformers taxonomy"/> <figcaption>Efficient Transformers taxonomy from Efficient Transformers: a Survey by Tay et al.</figcaption> </figure> # Hugging Face Reads, Feb. 2021 - Long-range Transformers Co... |
Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers | patrickvonplaten | March 12, 2021 | fine-tune-wav2vec2-english | guide, audio | https://huggingface.co/blog/fine-tune-wav2vec2-english | # Fine-Tune Wav2Vec2 for English ASR with 🤗 Transformers <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_tuning_Wav2Vec2_for_English_ASR.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> Wav2Vec2 ... |
My Journey to a serverless transformers pipeline on Google Cloud | Maxence | March 18, 2021 | how-to-deploy-a-pipeline-to-google-clouds | guide | https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds | # My Journey to a serverless transformers pipeline on <br>Google Cloud > ##### A guest blog post by community member <a href="/Maxence">Maxence Dominici</a> This article will discuss my journey to deploy the `transformers` _sentiment-analysis_ pipeline on [Google Cloud](https://cloud.google.com). We will start with... |
The Partnership: Amazon SageMaker and Hugging Face | philschmid | March 23, 2021 | the-partnership-amazon-sagemaker-and-hugging-face | partnerships, aws | https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face | <img src="/blog/assets/17_the_partnership_amazon_sagemaker_and_hugging_face/cover.png" alt="hugging-face-and-aws-logo" class="w-full"> > Look at these smiles! # **The Partnership: Amazon SageMaker and Hugging Face** Today, we announce a strategic partnership between Hugging Face and [Amazon](https://huggingface.co/... |
Understanding BigBird's Block Sparse Attention | vasudevgupta | March 31, 2021 | big-bird | community, research, nlp | https://huggingface.co/blog/big-bird | # Understanding BigBird's Block Sparse Attention ## Introduction Transformer-based models have been shown to be very useful for many NLP tasks. However, a major limitation of transformer-based models is their \\(O(n^2)\\) time & memory complexity (where \\(n\\) is sequence length). Hence, it's computationally very expens... |
Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker | philschmid | April 8, 2021 | sagemaker-distributed-training-seq2seq | guide, partnerships, aws, nlp | https://huggingface.co/blog/sagemaker-distributed-training-seq2seq | # Distributed Training: Train BART/T5 for Summarization using 🤗 Transformers and Amazon SageMaker <a target="_blank" href="https://github.com/huggingface/notebooks/blob/master/sagemaker/08_distributed_summarization_bart_t5/sagemaker-notebook.ipynb"> <img src="https://badgen.net/badge/Github/Open/black?icon=gith... |
Introducing 🤗 Accelerate | sgugger | April 16, 2021 | accelerate-library | guide | https://huggingface.co/blog/accelerate-library | # Introducing 🤗 Accelerate ## 🤗 Accelerate Run your **raw** PyTorch training scripts on any kind of device. Most high-level libraries above PyTorch provide support for distributed training and mixed precision, but the abstraction they introduce require a user to learn a new API if they want to customize the unde... |
Scaling-up BERT Inference on CPU (Part 1) | mfuntowicz | April 20, 2021 | bert-cpu-scaling-part-1 | guide, nlp, partnerships, intel | https://huggingface.co/blog/bert-cpu-scaling-part-1 | <style> .centered { display: block; margin: 0 auto; } figure { text-align: center; display: table; max-width: 85%; /* demo; set some amount (px or %) if you can */ margin: 10px auto; /* not needed unless you want centered */ } </style> # Scaling up BERT-like model Inferen... |
Using & Mixing Hugging Face Models with Gradio 2.0 | abidlabs | May 25, 2021 | gradio | open-source-collab, guide | https://huggingface.co/blog/gradio | # Using & Mixing Hugging Face Models with Gradio 2.0 > ##### Cross-posted from the [Gradio blog](https://gradio.app/blog/using-huggingface-models). The **[Hugging Face Model Hub](https://huggingface.co/models)** has more than 10,000 machine learning models submitted by users. You’ll find all kinds of natural ... |
Few-shot learning in practice: GPT-Neo and the 🤗 Accelerated Inference API | philschmid | June 3, 2021 | few-shot-learning-gpt-neo-and-inference-api | guide, nlp | https://huggingface.co/blog/few-shot-learning-gpt-neo-and-inference-api | # Few-shot learning in practice: GPT-Neo and the 🤗 Accelerated Inference API In many Machine Learning applications, the amount of available labeled data is a barrier to producing a high-performing model. The latest developments in NLP show that you can overcome this limitation by providing a few examples at inferen... |
Sentence Transformers in the 🤗 Hub | nreimers | June 28, 2021 | sentence-transformers-in-the-hub | open-source-collab, nlp | https://huggingface.co/blog/sentence-transformers-in-the-hub | # Sentence Transformers in the Hugging Face Hub Over the past few weeks, we've built collaborations with many Open Source frameworks in the machine learning ecosystem. One that gets us particularly excited is Sentence Transformers. [Sentence Transformers](https://github.com/UKPLab/sentence-transformers) is a framew... |
Deploy Hugging Face models easily with Amazon SageMaker | philschmid | July 8, 2021 | deploy-hugging-face-models-easily-with-amazon-sagemaker | guide, partnerships, aws | https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker | <img src="/blog/assets/17_the_partnership_amazon_sagemaker_and_hugging_face/cover.png" alt="hugging-face-and-aws-logo" class="w-full"> # **Deploy Hugging Face models easily with Amazon SageMaker 🏎** Earlier this year[ we announced a strategic collaboration with Amazon](https://huggingface.co/blog/the-partnership-a... |
Welcome spaCy to the 🤗 Hub | osanseviero | July 13, 2021 | spacy | open-source-collab, nlp | https://huggingface.co/blog/spacy | # Welcome spaCy to the Hugging Face Hub [spaCy](https://github.com/explosion/spaCy) is a popular library for advanced Natural Language Processing used widely across industry. spaCy makes it easy to use and train pipelines for tasks like named entity recognition, text classification, part of speech tagging and more, ... |
Deep Learning over the Internet: Training Language Models Collaboratively | mryab | July 15, 2021 | collaborative-training | research | https://huggingface.co/blog/collaborative-training | # Deep Learning over the Internet: Training Language Models Collaboratively <small> With the additional help of Quentin Lhoest and Sylvain Lesage. </small> Modern language models often require a significant amount of compute for pretraining, making it impossible to obtain them without access to tens and hundreds of... |
Introducing Optimum: The Optimization Toolkit for Transformers at Scale | mfuntowicz | September 14, 2021 | hardware-partners-program | guide | https://huggingface.co/blog/hardware-partners-program | # Introducing 🤗 Optimum: The Optimization Toolkit for Transformers at Scale This post is the first step of a journey for Hugging Face to democratize state-of-the-art **Machine Learning production performance**. To get there, we will work hand in hand with our Hardware Partners, as we have with Intel below. Join us... |
Hugging Face and Graphcore partner for IPU-optimized Transformers | sallydoherty | September 14, 2021 | graphcore | graphcore, partnerships | https://huggingface.co/blog/graphcore | # Hugging Face and Graphcore partner for IPU-optimized Transformers > ##### Speaking at the 2021 AI Hardware Summit, Hugging Face announced the launch of their new Hardware Partner Program, including device-optimized models and software integrations. Here, Graphcore - creators of the Intelligence Processing Unit (IP... |
Summer at Hugging Face ☀️ | huggingface | September 24, 2021 | summer-at-huggingface | community | https://huggingface.co/blog/summer-at-huggingface | # Summer At Hugging Face 😎 Summer is now officially over and these last few months have been quite busy at Hugging Face. From new features in the Hub to research and Open Source development, our team has been working hard to empower the community through open and collaborative technology. In this blog post you'll... |
Showcase Your Projects in Spaces using Gradio | merve | October 5, 2021 | gradio-spaces | guide | https://huggingface.co/blog/gradio-spaces | # Showcase Your Projects in Spaces using Gradio It's so easy to demonstrate a Machine Learning project thanks to [Gradio](https://gradio.app/). In this blog post, we'll walk you through: - the recent Gradio integration that helps you demo models from the Hub seamlessly with few lines of code leveraging the [Infere... |
Hosting your Models and Datasets on Hugging Face Spaces using Streamlit | merve | October 5, 2021 | streamlit-spaces | guide | https://huggingface.co/blog/streamlit-spaces | # Hosting your Models and Datasets on Hugging Face Spaces using Streamlit ## Showcase your Datasets and Models using Streamlit on Hugging Face Spaces [Streamlit](https://streamlit.io/) allows you to visualize datasets and build demos of Machine Learning models in a neat way. In this blog post we will walk you thr... |
Fine tuning CLIP with Remote Sensing (Satellite) images and captions | arampacha | October 13, 2021 | fine-tune-clip-rsicd | community, cv, nlp | https://huggingface.co/blog/fine-tune-clip-rsicd | # Fine tuning CLIP with Remote Sensing (Satellite) images and captions <img src="/blog/assets/30_clip_rsicd/clip-rsicd-header-image.png"/> In July this year, [Hugging Face](https://huggingface.co/) organized a [Flax/JAX Community Week](https:... |
The Age of Machine Learning As Code Has Arrived | juliensimon | October 20, 2021 | the-age-of-ml-as-code | analysis | https://huggingface.co/blog/the-age-of-ml-as-code | # The Age of Machine Learning As Code Has Arrived The 2021 edition of the [State of AI Report](https://www.stateof.ai/2021-report-launch.html) came out last week. So did the Kaggle [State of Machine Learning and Data Science Survey](https://www.kaggle.com/c/kaggle-survey-2021). There's much to be learned and disc... |
Train a Sentence Embedding Model with 1B Training Pairs | asi | October 25, 2021 | 1b-sentence-embeddings | community, nlp | https://huggingface.co/blog/1b-sentence-embeddings | # Train a Sentence Embedding Model with 1 Billion Training Pairs **Sentence embedding** is a method that maps sentences to vectors of real numbers. Ideally, these vectors would capture the semantic of a sentence and be highly generic. Such representations could then be used for many downstream applications such as c... |
Large Language Models: A New Moore's Law? | juliensimon | October 26, 2021 | large-language-models | analysis, nlp | https://huggingface.co/blog/large-language-models | # Large Language Models: A New Moore's Law? A few days ago, Microsoft and NVIDIA [introduced](https://www.microsoft.com/en-us/research/blog/using-deepspeed-and-megatron-to-train-megatron-turing-nlg-530b-the-worlds-largest-and-most-powerful-generative-language-model/) Megatron-Turing NLG 530B, a Transformer-based mo... |
Course Launch Community Event | sgugger | October 26, 2021 | course-launch-event | community, nlp | https://huggingface.co/blog/course-launch-event | # Course Launch Community Event We are excited to share that after a lot of work from the Hugging Face team, part 2 of the [Hugging Face Course](https://hf.co/course) will be released on November 15th! Part 1 focused on teaching you how to use a pretrained model, fine-tune it on a text classification task then uploa... |
Scaling up BERT-like model Inference on modern CPU - Part 2 | mfuntowicz | November 4, 2021 | bert-cpu-scaling-part-2 | partnerships, intel, guide, nlp | https://huggingface.co/blog/bert-cpu-scaling-part-2 | # Scaling up BERT-like model Inference on modern CPU - Part 2 <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> ## Introduction: Using Intel Software to Optimize AI Efficiency on CPU As we detailed in our [previous blog post](https://huggingface.co/blog/be... |
Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers | patrickvonplaten | November 15, 2021 | fine-tune-xlsr-wav2vec2 | guide, audio | https://huggingface.co/blog/fine-tune-xlsr-wav2vec2 | # Fine-tuning XLS-R for Multi-Lingual ASR with 🤗 Transformers <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Fine_Tune_XLS_R_on_Common_Voice.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ***New (... |
Accelerating PyTorch distributed fine-tuning with Intel technologies | juliensimon | November 19, 2021 | accelerating-pytorch | guide | https://huggingface.co/blog/accelerating-pytorch | # Accelerating PyTorch distributed fine-tuning with Intel technologies For all their amazing performance, state of the art deep learning models often take a long time to train. In order to speed up training jobs, engineering teams rely on distributed training, a divide-and-conquer technique where clustered servers ... |
Introducing the Data Measurements Tool: an Interactive Tool for Looking at Datasets | sasha | November 29, 2021 | data-measurements-tool | research | https://huggingface.co/blog/data-measurements-tool | # Introducing the 🤗 Data Measurements Tool: an Interactive Tool for Looking at Datasets ***tl;dr:*** We made a tool you can use online to build, measure, and compare datasets. [Click to access the 🤗 Data Measurements Tool here.](https://huggingface.co/spaces/huggingface/data-measurements-tool) ----- As develo... |
Getting Started with Hugging Face Transformers for IPUs with Optimum | internetoftim | November 30, 2021 | graphcore-getting-started | partnerships, graphcore, guide | https://huggingface.co/blog/graphcore-getting-started | # Getting Started with Hugging Face Transformers for IPUs with Optimum Transformer models have proven to be extremely efficient on a wide range of machine learning tasks, such as natural language processing, audio processing, and computer vision. However, the prediction speed of these large models can make them imp... |
Introducing Snowball Fight ☃️, our First ML-Agents Environment | ThomasSimonini | December 2, 2021 | snowball-fight | research, rl | https://huggingface.co/blog/snowball-fight | # Introducing Snowball Fight ☃️, our First ML-Agents Environment We're excited to share our **first custom Deep Reinforcement Learning environment**: Snowball Fight 1vs1 🎉.  Snowball Fight is a game made with Unity ML-Agents, where you shoot snowballs... |
Training CodeParrot 🦜 from Scratch | lvwerra | December 8, 2021 | codeparrot | guide, research, nlp | https://huggingface.co/blog/codeparrot | # Training CodeParrot 🦜 from Scratch In this blog post we'll take a look at what it takes to build the technology behind [GitHub CoPilot](https://copilot.github.com/), an application that provides suggestions to programmers as they code. In this step by step guide, we'll learn how to train a large GPT-2 model call... |
Perceiver IO: a scalable, fully-attentional model that works on any modality | nielsr | December 15, 2021 | perceiver | research, guide, nlp, audio, cv | https://huggingface.co/blog/perceiver | # Perceiver IO: a scalable, fully-attentional model that works on any modality ### TLDR We've added [Perceiver IO](https://huggingface.co/docs/transformers/model_doc/perceiver) to Transformers, the first Transformer-based neural network that works on all kinds of modalities (text, images, audio, video, point clouds... |
Gradio joins Hugging Face! | abidlabs | December 21, 2021 | gradio-joins-hf | community, open-source-collab | https://huggingface.co/blog/gradio-joins-hf | # Gradio is joining Hugging Face! <p> </p> _Gradio is joining Hugging Face! By acquiring Gradio, a machine learning startup, Hugging Face will be able to offer users, developers, and data scientists the tools needed to get to high level results and create better models and tools..._ Hmm, paragraphs about acqu... |
Active Learning with AutoNLP and Prodigy | abhishek | December 23, 2021 | autonlp-prodigy | research, partnerships, nlp | https://huggingface.co/blog/autonlp-prodigy | # Active Learning with AutoNLP and Prodigy Active learning in the context of Machine Learning is a process in which you iteratively add labeled data, retrain a model and serve it to the end user. It is an endless process and requires human interaction for labeling/creating the data. In this article, we will discuss ... |
Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker | philschmid | January 11, 2022 | gptj-sagemaker | partnerships, aws, guide, nlp | https://huggingface.co/blog/gptj-sagemaker | # Deploy GPT-J 6B for inference using Hugging Face Transformers and Amazon SageMaker <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> Almost 6 months ago to the day, [EleutherAI](https://www.eleuther.ai/) released [GPT-J 6B](https://huggingface.co/Eleuthe... |
Boost Wav2Vec2 with n-gram LM in 🤗 Transformers | patrickvonplaten | January 12, 2022 | wav2vec2-with-ngram | research, guide, audio | https://huggingface.co/blog/wav2vec2-with-ngram | # Boosting Wav2Vec2 with n-grams in 🤗 Transformers <a target="_blank" href="https://colab.research.google.com/github/patrickvonplaten/notebooks/blob/master/Boosting_Wav2Vec2_with_n_grams_in_Transformers.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> **Wav... |
Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs | philschmid | January 13, 2022 | infinity-cpu-performance | analysis | https://huggingface.co/blog/infinity-cpu-performance | # Case Study: Millisecond Latency using Hugging Face Infinity and modern CPUs <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <br> <div style="background-color: #e6f9e6; padding: 16px 32px; outline: 2px solid; border-radius: 10px;"> December 2022 Update: ... |
Welcome Stable-baselines3 to the Hugging Face Hub 🤗 | ThomasSimonini | January 21, 2022 | sb3 | open-source-collab, rl | https://huggingface.co/blog/sb3 | # Welcome Stable-baselines3 to the Hugging Face Hub 🤗 At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. That’s why we’re happy to announce that we integrated [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3) to the Hugging Face Hub. [S... |
Supercharged Searching on the Hugging Face Hub | muellerzr | January 25, 2022 | searching-the-hub | guide | https://huggingface.co/blog/searching-the-hub | # Supercharged Searching on the Hugging Face Hub <a target="_blank" href="https://colab.research.google.com/github/muellerzr/hf-blog-notebooks/blob/main/Searching-the-Hub.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> The `huggingface_hub` library is a lig... |
Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers | Narsil | February 1, 2022 | asr-chunking | guide, research, audio | https://huggingface.co/blog/asr-chunking | # Making automatic speech recognition work on large files with Wav2Vec2 in 🤗 Transformers ``` Tl;dr: This post explains how to use the specificities of the Connectionist Temporal Classification (CTC) architecture in order to achieve very good quality automatic speech recognition (ASR) even on arbitrarily long files... |
Getting Started with Sentiment Analysis using Python | FedericoPascual | February 2, 2022 | sentiment-analysis-python | sentiment-analysis, nlp, guide | https://huggingface.co/blog/sentiment-analysis-python | # Getting Started with Sentiment Analysis using Python <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> Sentiment analysis is the automated process of tagging data according to their sentiment, such as positive, negative and neutral. Sentiment analysis allo... |
Fine-Tune ViT for Image Classification with 🤗 Transformers | nateraw | February 11, 2022 | fine-tune-vit | guide, cv | https://huggingface.co/blog/fine-tune-vit | # Fine-Tune ViT for Image Classification with 🤗 Transformers <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <a target="_blank" href="https://colab.research.google.com/github/nateraw/huggingface-hub-examples/blob/main/vit_image_classification_explained.ip... |
BERT 101 🤗 State Of The Art NLP Model Explained | britneymuller | March 2, 2022 | bert-101 | guide, nlp | https://huggingface.co/blog/bert-101 | <html itemscope itemtype="https://schema.org/FAQPage"> # BERT 101 🤗 State Of The Art NLP Model Explained <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> ## What is BERT? BERT, short for Bidirectional Encoder Representations from Transformers, is a Machi... |
Guiding Text Generation with Constrained Beam Search in 🤗 Transformers | cwkeam | March 11, 2022 | constrained-beam-search | guide, nlp | https://huggingface.co/blog/constrained-beam-search | # Guiding Text Generation with Constrained Beam Search in 🤗 Transformers <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/53_constrained_beam_search.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> ## *... |
Image search with 🤗 datasets | davanstrien | March 16, 2022 | image-search-datasets | cv | https://huggingface.co/blog/image-search-datasets | # Image search with 🤗 datasets <a target="_blank" href="https://colab.research.google.com/gist/davanstrien/e2c29fbbed20dc767e5a74e210f4237b/hf_blog_image_search.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> </a> 🤗 [`datasets`](https://huggingface.co/docs/d... |
Accelerate BERT inference with Hugging Face Transformers and AWS inferentia | philschmid | March 16, 2022 | bert-inferentia-sagemaker | partnerships, aws, guide, nlp | https://huggingface.co/blog/bert-inferentia-sagemaker | # Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> notebook: [sagemaker/18_inferentia_inference](https://github.com/huggingface/notebooks/blob/master/sagemaker/18_inferentia_infere... |
Fine-Tune a Semantic Segmentation Model with a Custom Dataset | segments-tobias | March 17, 2022 | fine-tune-segformer | guide, partnerships, cv | https://huggingface.co/blog/fine-tune-segformer | # Fine-Tune a Semantic Segmentation Model with a Custom Dataset <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/56_fine_tune_segformer.ipynb"> <img s... |
Announcing the 🤗 AI Research Residency Program | douwekiela | March 22, 2022 | ai-residency | community, research | https://huggingface.co/blog/ai-residency | # Announcing the 🤗 AI Research Residency Program 🎉 🎉 🎉 The 🤗 Research Residency Program is a 9-month opportunity to launch or advance your career in machine learning research 🚀. The goal of the residency is to help you grow into an impactful AI researcher. Residents will work alongside Researchers from our Sc... |
Machine Learning Experts - Meg Mitchell Interview | britneymuller | March 23, 2022 | meg-mitchell-interview | expert-acceleration-program, ml-experts | https://huggingface.co/blog/meg-mitchell-interview | # Machine Learning Experts - Margaret Mitchell Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is none other than [Margaret Mitchell](https://twitter.com/mmitchell_ai) (Meg for short). Meg founded & co-led Google’s Ethical AI Group, is a pioneer in the field of Machi... |
Introducing Decision Transformers on Hugging Face 🤗 | edbeeching | March 28, 2022 | decision-transformers | open-source-collab, guide, rl | https://huggingface.co/blog/decision-transformers | # Introducing Decision Transformers on Hugging Face 🤗 At Hugging Face, we are contributing to the ecosystem for Deep Reinforcement Learning researchers and enthusiasts. Recently, we have integrated Deep RL frameworks such as [Stable-Baselines3](https://github.com/DLR-RM/stable-baselines3). And today we are happy ... |
Don't repeat yourself - 🤗 Transformers Design Philosophy | patrickvonplaten | April 5, 2022 | transformers-design-philosophy | community | https://huggingface.co/blog/transformers-design-philosophy | # ~~Don't~~ Repeat Yourself* ##### *Designing open-source libraries for modern machine learning* ## 🤗 Transformers Design Philosophy *"Don't repeat yourself"*, or **DRY**, is a well-known principle of software development. The principle originates from "The pragmatic programmer", one of the most read books on code... |
Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training | susanlansing | April 12, 2022 | habana | partnerships | https://huggingface.co/blog/habana | # Habana Labs and Hugging Face Partner to Accelerate Transformer Model Training *Santa Clara and San Francisco, CA, April 12th, 2022* Powered by deep learning, transformer models deliver state-of-the-art performance on a wide range of machine learning tasks, such as natural language processing, computer vision, spe... |
Machine Learning Experts - Lewis Tunstall Interview | britneymuller | April 13, 2022 | lewis-tunstall-interview | expert-acceleration-program, ml-experts | https://huggingface.co/blog/lewis-tunstall-interview | # Machine Learning Experts - Lewis Tunstall ## 🤗 Welcome to Machine Learning Experts - Lewis Tunstall Hey friends! Welcome to Machine Learning Experts. I'm your host, Britney Muller and today’s guest is [Lewis Tunstall](https://twitter.com/_lewtun). Lewis is a Machine Learning Engineer at Hugging Face where he wo... |
CO2 Emissions and the 🤗 Hub: Leading the Charge | sasha | April 22, 2022 | carbon-emissions-on-the-hub | community, guide | https://huggingface.co/blog/carbon-emissions-on-the-hub | # CO2 Emissions and the 🤗 Hub: Leading the Charge ## What are CO2 Emissions and why are they important? Climate change is one of the greatest challenges that we are facing and reducing emissions of greenhouse gases such as carbon dioxide (CO2) is an important part of tackling this problem. Training and deployi... |
Supercharged Customer Service with Machine Learning | patrickvonplaten | April 25, 2022 | supercharge-customer-service-with-machine-learning | guide, nlp | https://huggingface.co/blog/supercharge-customer-service-with-machine-learning | # Supercharged Customer Service with Machine Learning <a target="_blank" href="https://github.com/patrickvonplaten/notebooks/blob/master/Using_%F0%9F%A4%97_Transformers_and_%F0%9F%A4%97_Datasets_filter_customer_feedback_filtering.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Op... |
Introducing Hugging Face for Education | Violette | April 25, 2022 | education | community | https://huggingface.co/blog/education | # Introducing Hugging Face for Education 🤗 Given that machine learning will make up the overwhelming majority of software development and that non-technical people will be exposed to AI systems more and more, one of the main challenges of AI is adapting and enhancing employee skills. It is also becoming necessary t... |
Getting Started with Transformers on Habana Gaudi | juliensimon | April 26, 2022 | getting-started-habana | partnerships, guide | https://huggingface.co/blog/getting-started-habana | # Getting Started with Transformers on Habana Gaudi A couple of weeks ago, we had the pleasure of [announcing](https://huggingface.co/blog/habana) that [Habana Labs](https://habana.ai) and [Hugging Face](https://huggingface.co/) would partner to accelerate Transformer model training. Habana Gaudi accelerators del... |
Director of Machine Learning Insights [Series] | britneymuller | April 27, 2022 | ml-director-insights | community, research | https://huggingface.co/blog/ml-director-insights | # Director of Machine Learning Insights [Part 1] Few seats at the Machine Learning table span technical skills, problem solving, and business acumen like Directors of Machine Learning. Directors of Machine Learning and/or Data Science are often expected to design ML systems, have deep knowledge of mathematics, f... |
Opinion Classification with Kili and HuggingFace AutoTrain | alperiox | April 28, 2022 | opinion-classification-with-kili | guide | https://huggingface.co/blog/opinion-classification-with-kili | # Opinion Classification with Kili and HuggingFace AutoTrain ## Introduction Understanding your users’ needs is crucial in any user-related business, but it also requires a lot of hard work and analysis, which is quite expensive. Why not leverage Machine Learning, with much less coding, by using Auto ML? In th... |
Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel | smangrul | May 2, 2022 | pytorch-fsdp | guide | https://huggingface.co/blog/pytorch-fsdp | # Accelerate Large Model Training using PyTorch Fully Sharded Data Parallel In this post we will look at how we can leverage the **[Accelerate](https://github.com/huggingface/accelerate)** library for training large models, which enables users to leverage the latest features of **[PyTorch FullyShardedDataParallel (FSDP)]... |
An Introduction to Deep Reinforcement Learning | ThomasSimonini | May 4, 2022 | deep-rl-intro | rl | https://huggingface.co/blog/deep-rl-intro | # An Introduction to Deep Reinforcement Learning <h2>Chapter 1 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introductio... |
Welcome fastai to the Hugging Face Hub | espejelomar | May 6, 2022 | fastai | guide, open-source-collab, community | https://huggingface.co/blog/fastai | # Welcome fastai to the Hugging Face Hub ## Making neural nets uncool again... and sharing them <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/64_fastai_hub.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> <... |
We Raised $100 Million for Open & Collaborative Machine Learning 🚀 | The Hugging Face Team | May 9, 2022 | series-c | news | https://huggingface.co/blog/series-c | # We Raised $100 Million for Open & Collaborative Machine Learning 🚀 Today we have some exciting news to share! Hugging Face has raised $100 Million in Series C funding 🔥🔥🔥 led by Lux Capital with major participations from Sequoia, Coatue and support of existing investors Addition, a_capital, SV Angel, Betaworks... |
Accelerated Inference with Optimum and Transformers Pipelines | philschmid | May 10, 2022 | optimum-inference | guide, community | https://huggingface.co/blog/optimum-inference | # Accelerated Inference with Optimum and Transformers Pipelines > Inference has landed in Optimum with support for Hugging Face Transformers pipelines, including text-generation using ONNX Runtime. The adoption of BERT and Transformers continues to grow. Transformer-based models are now not only achieving state-of-... |
Student Ambassador Program's call for applications is open! | Violette | May 13, 2022 | ambassadors | community | https://huggingface.co/blog/ambassadors | # Student Ambassador Program’s call for applications is open! As an open-source company democratizing machine learning, Hugging Face believes it is essential to **[teach](https://huggingface.co/blog/education)** open-source ML to people from all backgrounds worldwide. **We aim to teach machine learning to 5 million ... |
Director of Machine Learning Insights [Part 2: SaaS Edition] | britneymuller | May 13, 2022 | ml-director-insights-2 | community, research | https://huggingface.co/blog/ml-director-insights-2 | # Director of Machine Learning Insights [Part 2: SaaS Edition] _If you or your team are interested in building ML solutions faster visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_2) today!_ 👋 Welcome to Part 2 of our Director of Machine Lea... |
Gradio 3.0 is Out! | abidlabs | May 16, 2022 | gradio-blocks | community, open-source-collab | https://huggingface.co/blog/gradio-blocks | # Gradio 3.0 is Out! ### Machine Learning Demos Machine learning demos are an increasingly vital part of releasing a model. Demos allow anyone — not just ML engineers — to try out a model in the browser, give feedback on predictions, and build trust in the model if it performs well. More than 600,000 ML demos ha... |
Announcing the Hugging Face Fellowship Program | espejelomar | May 17, 2022 | fellowship | community | https://huggingface.co/blog/fellowship | # Announcing the Hugging Face Fellowship Program The Fellowship is a network of exceptional people from different backgrounds who contribute to the Machine Learning open-source ecosystem 🚀. The goal of the program is to empower key contributors to enable them to scale their impact while inspiring others to contrib... |
Machine Learning Experts - Sasha Luccioni Interview | britneymuller | May 17, 2022 | sasha-luccioni-interview | expert-acceleration-program, ml-experts | https://huggingface.co/blog/sasha-luccioni-interview | # Machine Learning Experts - Sasha Luccioni ## 🤗 Welcome to Machine Learning Experts - Sasha Luccioni 🚀 _If you're interested in learning how ML Experts, like Sasha, can help accelerate your ML roadmap visit: <a href="https://huggingface.co/support?utm_source=blog&utm_medium=blog&utm_campaign=ml_experts&utm_conte... |
An Introduction to Q-Learning Part 1 | ThomasSimonini | May 18, 2022 | deep-rl-q-part1 | rl | https://huggingface.co/blog/deep-rl-q-part1 | # An Introduction to Q-Learning Part 1 <h2>Unit 2, part 1 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https... |
Putting ethical principles at the core of research lifecycle | SaulLu | May 19, 2022 | ethical-charter-multimodal | research, nlp, audio, cv | https://huggingface.co/blog/ethical-charter-multimodal | # Putting ethical principles at the core of the research lifecycle ## Ethical charter - Multimodal project ## Purpose of the ethical charter It has been well documented that machine learning research and applications can potentially lead to "data privacy issues, algorithmic biases, automation risks and malicious u... |
How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap | federicopascual | May 19, 2022 | sempre-health-eap-case-study | expert-acceleration-program, case-study, case-studies | https://huggingface.co/blog/sempre-health-eap-case-study | # How Sempre Health is leveraging the Expert Acceleration Program to accelerate their ML roadmap 👋 Hello, friends! We recently sat down with [Swaraj Banerjee](https://www.linkedin.com/in/swarajbanerjee/) and [Larry Zhang](https://www.linkedin.com/in/larry-zhang-b58642a3/) from [Sempre Health](https://www.semprehea... |
An Introduction to Q-Learning Part 2 | ThomasSimonini | May 20, 2022 | deep-rl-q-part2 | rl | https://huggingface.co/blog/deep-rl-q-part2 | # An Introduction to Q-Learning Part 2/2 <h2>Unit 2, part 2 of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](htt... |
Efficient Table Pre-training without Real Data: An Introduction to TAPEX | SivilTaram | May 23, 2022 | tapex | research, nlp, community | https://huggingface.co/blog/tapex | # Efficient Table Pre-training without Real Data: An Introduction to TAPEX In recent years, language model pre-training has achieved great success via leveraging large-scale textual data. By employing pre-training tasks such as [masked language modeling](https://arxiv.org/abs/1810.04805), these models have demonstr... |
Introducing Pull Requests and Discussions 🥳 | victor | May 25, 2022 | community-update | launch | https://huggingface.co/blog/community-update | # Introducing Pull Requests and Discussions 🥳  We are thrilled to announce the release of our latest collaborative features: pull requests and discussions on the Hugging Face Hub! Pull requests and discussions are available... |
Graphcore and Hugging Face Launch New Lineup of IPU-Ready Transformers | sallydoherty | May 26, 2022 | graphcore-update | graphcore, partnerships | https://huggingface.co/blog/graphcore-update | # Graphcore and Hugging Face Launch New Lineup of IPU-Ready Transformers [Graphcore](https://huggingface.co/hardware/graphcore/) and Hugging Face have significantly expanded the range of Machine Learning modalities and tasks available in [Hugging Face Optimum](https://github.com/huggingface/optimum), an open-source ... |
Deep Q-Learning with Atari | ThomasSimonini | June 7, 2022 | deep-rl-dqn | rl | https://huggingface.co/blog/deep-rl-dqn | # Deep Q-Learning with Space Invaders <h2>Unit 3, of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https://hugg... |
The Annotated Diffusion Model | nielsr | June 7, 2022 | annotated-diffusion | guide, diffusion, stable-diffusion | https://huggingface.co/blog/annotated-diffusion | # The Annotated Diffusion Model <script async defer src="https://unpkg.com/medium-zoom-element@0/dist/medium-zoom-element.min.js"></script> <a target="_blank" href="https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/annotated_diffusion.ipynb"> <img src="https://colab.research.goog... |
Director of Machine Learning Insights [Part 3: Finance Edition] | britneymuller | June 14, 2022 | ml-director-insights-3 | community, research | https://huggingface.co/blog/ml-director-insights-3 | # Director of Machine Learning Insights [Part 3: Finance Edition] _If you're interested in building ML solutions faster visit [hf.co/support](https://huggingface.co/support?utm_source=article&utm_medium=blog&utm_campaign=ml_director_insights_3) today!_ 👋 Welcome back to our Director of ML Insights Series, Finance... |
Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration | juliensimon | June 15, 2022 | intel | hardware, intel, guide | https://huggingface.co/blog/intel | # Intel and Hugging Face Partner to Democratize Machine Learning Hardware Acceleration  The mission of Hugging Face is to democratize good machine learning and maximize its positive impact across industries and society. Not only do we strive to advance Transformer models, but we ... |
Convert Transformers to ONNX with Hugging Face Optimum | philschmid | June 22, 2022 | convert-transformers-to-onnx | guide, community, hardware | https://huggingface.co/blog/convert-transformers-to-onnx | # Convert Transformers to ONNX with Hugging Face Optimum Hundreds of Transformers experiments and models are uploaded to the [Hugging Face Hub](https://huggingface.co/) every single day. Machine learning engineers and students conducting those experiments use a variety of frameworks like PyTorch, TensorFlow/Keras, or... |
Getting Started With Embeddings | espejelomar | June 23, 2022 | getting-started-with-embeddings | guide, nlp | https://huggingface.co/blog/getting-started-with-embeddings | # Getting Started With Embeddings Check out this tutorial with the Notebook Companion: <a target="_blank" href="https://colab.research.google.com/github/huggingface/blog/blob/main/notebooks/80_getting_started_with_embeddings.ipynb"> <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In... |
Announcing Evaluation on the Hub | douwekiela | June 28, 2022 | eval-on-the-hub | community, launch, guide | https://huggingface.co/blog/eval-on-the-hub | # Announcing Evaluation on the Hub <br> <div style="background-color: #e6f9e6; padding: 16px 32px; outline: 2px solid; border-radius: 10px;"> November 2023 Update: This project has been archived. If you want to evaluate LLMs on the Hub, check out [this collection of leaderboards](https://huggingface.co/collectio... |
Accelerate Large Model Training using DeepSpeed | smangrul | June 28, 2022 | accelerate-deepspeed | guide | https://huggingface.co/blog/accelerate-deepspeed | # Accelerate Large Model Training using DeepSpeed In this post we will look at how we can leverage the **[Accelerate](https://github.com/huggingface/accelerate)** library for training large models, which enables users to leverage the ZeRO features of **[DeepSpeed](https://www.deepspeed.ai)**. # Motivation 🤗 **Tired ... |
Liftoff! How to get started with your first ML project 🚀 | nimaboscarino | June 29, 2022 | your-first-ml-project | guide | https://huggingface.co/blog/your-first-ml-project | # Liftoff! How to get started with your first ML project 🚀 People who are new to the Machine Learning world often run into two recurring stumbling blocks. The first is choosing the right library to learn, which can be daunting when there are so many to pick from. Even once you’ve settled on a library and gone throu... |
Policy Gradient with PyTorch | ThomasSimonini | June 30, 2022 | deep-rl-pg | rl | https://huggingface.co/blog/deep-rl-pg | # Policy Gradient with PyTorch <h2>Unit 5, of the <a href="https://github.com/huggingface/deep-rl-class">Deep Reinforcement Learning Class with Hugging Face 🤗</a></h2> ⚠️ A **new updated version of this article is available here** 👉 [https://huggingface.co/deep-rl-course/unit1/introduction](https://huggingface.... |