The full dataset viewer is not available; only a preview of the rows is shown below.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    TypeError
Message:      Couldn't cast array of type string to null
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
                  writer.write_table(table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
                  pa_table = table_cast(pa_table, self._schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in cast_table_to_schema
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2261, in <listcomp>
                  arrays = [cast_array_to_feature(table[name], feature) for name, feature in features.items()]
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in wrapper
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1802, in <listcomp>
                  return pa.chunked_array([func(chunk, *args, **kwargs) for chunk in array.chunks])
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2116, in cast_array_to_feature
                  return array_cast(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1804, in wrapper
                  return func(array, *args, **kwargs)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 1962, in array_cast
                  raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
              TypeError: Couldn't cast array of type string to null
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1524, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1099, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset


Columns (name: type):

url: string
repository_url: string
labels_url: string
comments_url: string
events_url: string
html_url: string
id: int64
node_id: string
number: int64
title: string
user: dict
labels: list
state: string
locked: bool
assignee: dict
assignees: list
milestone: null
comments: int64
created_at: int64
updated_at: int64
closed_at: null
author_association: string
active_lock_reason: null
body: string
performed_via_github_app: null
pull_request: null
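In the preview rows below, created_at and updated_at are int64 Unix epochs in milliseconds rather than ISO date strings. A quick way to decode one, using the first row's created_at value:

```python
from datetime import datetime, timezone

# created_at / updated_at in this preview are epoch milliseconds.
created_at_ms = 1617421052000
dt = datetime.fromtimestamp(created_at_ms / 1000, tz=timezone.utc)
print(dt.isoformat())  # 2021-04-03T03:37:32+00:00
```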
url: https://api.github.com/repos/huggingface/transformers/issues/11046
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11046/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11046/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11046/events
html_url: https://github.com/huggingface/transformers/issues/11046
id: 849568459
node_id: MDU6SXNzdWU4NDk1Njg0NTk=
number: 11046
title: Potential incorrect application of layer norm in BlenderbotSmallDecoder
user: { "login": "sougata-ub", "id": 59206549, "node_id": "MDQ6VXNlcjU5MjA2NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/59206549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sougata-ub", "html_url": "https://github.com/sougata-ub", "followers_url": "https://api.github.com/use...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617421052000
updated_at: 1617421052000
closed_at: null
author_association: NONE
active_lock_reason: null
body: In BlenderbotSmallDecoder, layer norm is applied only on the token embeddings, and not on the hidden_states, whereas in the BlenderbotSmallEncoder, layer norm is applied after adding the input_embeds and positional embeds BlenderbotSmallEncoder: `hidden_states = inputs_embeds + embed_pos` `hidden_states = self.la...
performed_via_github_app: null
pull_request: null

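The node_id values in these rows are GitHub's legacy global IDs, which (to my understanding) are base64-encoded "type-prefix:ObjectKind<id>" strings. Decoding the first row's node_id recovers its numeric id:

```python
import base64

# Legacy GitHub node_ids are base64-encoded; decoding exposes the object
# kind and the numeric id seen in the `id` column of the same row.
node_id = "MDU6SXNzdWU4NDk1Njg0NTk="
decoded = base64.b64decode(node_id).decode()
print(decoded)  # 05:Issue849568459
```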
url: https://api.github.com/repos/huggingface/transformers/issues/11045
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11045/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11045/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11045/events
html_url: https://github.com/huggingface/transformers/issues/11045
id: 849544374
node_id: MDU6SXNzdWU4NDk1NDQzNzQ=
number: 11045
title: Multi-GPU seq2seq example evaluation significantly slower than legacy example evaluation
user: { "login": "PeterAJansen", "id": 3813268, "node_id": "MDQ6VXNlcjM4MTMyNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3813268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PeterAJansen", "html_url": "https://github.com/PeterAJansen", "followers_url": "https://api.github.com...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617411144000
updated_at: 1617411144000
closed_at: null
author_association: NONE
active_lock_reason: null
body: ### Who can help @patil-suraj @sgugger Models: T5 ## Information I've been doing multi-GPU evaluation for some weeks using a Transformers pull from Feb 12th, just using the example scripts for training/evaluating custom datasets (specifically `run_distributed_eval.py` , though that seq2seq example is now ...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11044
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11044/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11044/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11044/events
html_url: https://github.com/huggingface/transformers/issues/11044
id: 849529761
node_id: MDU6SXNzdWU4NDk1Mjk3NjE=
number: 11044
title: [DeepSpeed] ZeRO stage 3 integration: getting started and issues
user: { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/fo...
labels: [ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
state: open
locked: false
assignee: { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/fo...
assignees: [ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github...
milestone: null
comments: 0
created_at: 1617406842000
updated_at: 1617408018000
closed_at: null
author_association: COLLABORATOR
active_lock_reason: null
body: **[This is not yet alive, preparing for the release, so please ignore for now]** The DeepSpeed ZeRO-3 has been integrated into HF `transformers`. While I tried to write tests for a wide range of situations I'm sure I've missed some scenarios so if you run into any problems please file a separate issue. I'm going...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11043
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11043/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11043/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11043/events
html_url: https://github.com/huggingface/transformers/issues/11043
id: 849499734
node_id: MDU6SXNzdWU4NDk0OTk3MzQ=
number: 11043
title: Can't load model to estimater
user: { "login": "gwc4github", "id": 3164663, "node_id": "MDQ6VXNlcjMxNjQ2NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3164663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gwc4github", "html_url": "https://github.com/gwc4github", "followers_url": "https://api.github.com/users...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617400304000
updated_at: 1617400304000
closed_at: null
author_association: NONE
active_lock_reason: null
body: I was trying to follow the Sagemaker instructions [here](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html) to load the model I just trained and test an estimation. I get the error message: NotImplementedError: Creating model with HuggingFace training job is not supported. Can someone share some s...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11042
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11042/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11042/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11042/events
html_url: https://github.com/huggingface/transformers/issues/11042
id: 849274362
node_id: MDU6SXNzdWU4NDkyNzQzNjI=
number: 11042
title: [LXMERT] Unclear what img_tensorize does with color spaces
user: { "login": "hivestrung", "id": 27841209, "node_id": "MDQ6VXNlcjI3ODQxMjA5", "avatar_url": "https://avatars.githubusercontent.com/u/27841209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hivestrung", "html_url": "https://github.com/hivestrung", "followers_url": "https://api.github.com/use...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617376377000
updated_at: 1617376507000
closed_at: null
author_association: NONE
active_lock_reason: null
body: ## Environment info - `transformers` version: Not using transformers directly, I'm loading a model "unc-nlp/frcnn-vg-finetuned" - Platform: MacOS - Python version: 3.8 - PyTorch version (GPU?): 1.6.0, no GPU - Tensorflow version (GPU?): don't have - Using GPU in script?: no - Using distributed or parallel set-...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11041
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11041/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11041/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11041/events
html_url: https://github.com/huggingface/transformers/pull/11041
id: 849269684
node_id: MDExOlB1bGxSZXF1ZXN0NjA4MDcxNjc1
number: 11041
title: wav2vec2 converter: create the proper vocab.json while converting fairseq wav2vec2 finetuned model
user: { "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/fo...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617375854000
updated_at: 1617377521000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body: # What does this PR do? While converting a finetuned wav2vec2 model we also need to convert the related dictionary `dict.ltr.txt` to hugging face `vocab.json` format. If a `dict_path` is specified: - Creates&saves the necessary vocab.json file - Modifies config file special token ids and vocab size accordin...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11040
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11040/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11040/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11040/events
html_url: https://github.com/huggingface/transformers/issues/11040
id: 849265615
node_id: MDU6SXNzdWU4NDkyNjU2MTU=
number: 11040
title: max_length in beam_search() and group_beam_search() does not consider beam_scorer.max_length
user: { "login": "GeetDsa", "id": 13940397, "node_id": "MDQ6VXNlcjEzOTQwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GeetDsa", "html_url": "https://github.com/GeetDsa", "followers_url": "https://api.github.com/users/GeetDs...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617375392000
updated_at: 1617375452000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body: ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: - Platform: 4.3.2 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.0 - Using GPU in script?: No - Us...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11039
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11039/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11039/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11039/events
html_url: https://github.com/huggingface/transformers/issues/11039
id: 849244819
node_id: MDU6SXNzdWU4NDkyNDQ4MTk=
number: 11039
title: Trainer not logging into Tensorboard
user: { "login": "thomas-happify", "id": 66082334, "node_id": "MDQ6VXNlcjY2MDgyMzM0", "avatar_url": "https://avatars.githubusercontent.com/u/66082334?v=4", "gravatar_id": "", "url": "https://api.github.com/users/thomas-happify", "html_url": "https://github.com/thomas-happify", "followers_url": "https://api.gi...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617373074000
updated_at: 1617387532000
closed_at: null
author_association: NONE
active_lock_reason: null
body: ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.5.0.dev0 - Platform: Ubuntu 18.04.5 LTS (x86_64) - Python version: 3.7.0 - PyTorch version (GPU?): 1.7.1+c...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11038
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11038/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11038/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11038/events
html_url: https://github.com/huggingface/transformers/issues/11038
id: 849180384
node_id: MDU6SXNzdWU4NDkxODAzODQ=
number: 11038
title: DeBERTa xlarge v2 throwing runtime error
user: { "login": "roshan-k-patel", "id": 48667731, "node_id": "MDQ6VXNlcjQ4NjY3NzMx", "avatar_url": "https://avatars.githubusercontent.com/u/48667731?v=4", "gravatar_id": "", "url": "https://api.github.com/users/roshan-k-patel", "html_url": "https://github.com/roshan-k-patel", "followers_url": "https://api.gi...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 1617365153000
updated_at: 1617372009000
closed_at: null
author_association: NONE
active_lock_reason: null
body: - `transformers` version: 4.4.2 - Platform: Linux-3.10.0-1127.el7.x86_64-x86_64-with-redhat-7.8-Maipo - Python version: 3.6.13 - PyTorch version (GPU?): 1.7.1 (True) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script: yes ``` RuntimeError: Error(s) in loading state_dict for DebertaForSequenc...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11036
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11036/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11036/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11036/events
html_url: https://github.com/huggingface/transformers/issues/11036
id: 848996240
node_id: MDU6SXNzdWU4NDg5OTYyNDA=
number: 11036
title: BertForTokenClassification class ignores long tokens when making predictions
user: { "login": "guanqun-yang", "id": 36497361, "node_id": "MDQ6VXNlcjM2NDk3MzYx", "avatar_url": "https://avatars.githubusercontent.com/u/36497361?v=4", "gravatar_id": "", "url": "https://api.github.com/users/guanqun-yang", "html_url": "https://github.com/guanqun-yang", "followers_url": "https://api.github.c...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617344535000
updated_at: 1617349504000
closed_at: null
author_association: NONE
active_lock_reason: null
body: # Goal I am trying to run the adapted version of `run_ner.py` hosted [here](https://github.com/huggingface/transformers/tree/master/examples/token-classification) (see MWE session for my code) on my custom dataset. The dataset I am using has some extra-long tokens (mainly URLs). When I obtained the predictions af...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11035
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11035/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11035/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11035/events
html_url: https://github.com/huggingface/transformers/issues/11035
id: 848976468
node_id: MDU6SXNzdWU4NDg5NzY0Njg=
number: 11035
title: 404 Client Error: Not Found for url: https://huggingface.co/%5CHuggingface-Sentiment-Pipeline/resolve/main/config.json
user: { "login": "nithinreddyy", "id": 56256685, "node_id": "MDQ6VXNlcjU2MjU2Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/56256685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nithinreddyy", "html_url": "https://github.com/nithinreddyy", "followers_url": "https://api.github.c...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 4
created_at: 1617341944000
updated_at: 1617369878000
closed_at: null
author_association: NONE
active_lock_reason: null
body: I'm trying to use the hugging face sentimet-analysis pipeline. I've downloaded the pipeline using save.pretrained(model). And trying to load the pipeline with the help of below code ``` from transformers import pipeline model = '\Huggingface-Sentiment-Pipeline' classifier = pipeline(task='sentiment-analysis', mod...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11034
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11034/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11034/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11034/events
html_url: https://github.com/huggingface/transformers/issues/11034
id: 848939310
node_id: MDU6SXNzdWU4NDg5MzkzMTA=
number: 11034
title: GPT-2 example is broken?
user: { "login": "ba305", "id": 35350330, "node_id": "MDQ6VXNlcjM1MzUwMzMw", "avatar_url": "https://avatars.githubusercontent.com/u/35350330?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ba305", "html_url": "https://github.com/ba305", "followers_url": "https://api.github.com/users/ba305/follow...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 1617335800000
updated_at: 1617384338000
closed_at: null
author_association: NONE
active_lock_reason: null
body: ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: I have had this issue with both 4.3.0 and 4.4.2 (and probably other versions as well) - Python version: 3.7.6 ...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11033
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11033/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11033/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11033/events
html_url: https://github.com/huggingface/transformers/issues/11033
id: 848936573
node_id: MDU6SXNzdWU4NDg5MzY1NzM=
number: 11033
title: RuntimeError: The size of tensor a (1024) must match the size of tensor b (1025) at non-singleton dimension 3
user: { "login": "yananchen1989", "id": 26405281, "node_id": "MDQ6VXNlcjI2NDA1Mjgx", "avatar_url": "https://avatars.githubusercontent.com/u/26405281?v=4", "gravatar_id": "", "url": "https://api.github.com/users/yananchen1989", "html_url": "https://github.com/yananchen1989", "followers_url": "https://api.githu...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617335157000
updated_at: 1617335157000
closed_at: null
author_association: NONE
active_lock_reason: null
body: Here I try to use gpt2 to generation the text under the prompt text. I have several datasets, some of them, such as AG_NEWS and POP_NEWS, are made of short sentences while when I use YAHOO_NEWS, consisting of longer sentences, the error came out. Anything to modify for my codes? Thanks. ``` from transformers imp...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11032
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11032/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11032/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11032/events
html_url: https://github.com/huggingface/transformers/issues/11032
id: 848921982
node_id: MDU6SXNzdWU4NDg5MjE5ODI=
number: 11032
title: How to get masked word prediction for other languages
user: { "login": "AnnaSou", "id": 43326583, "node_id": "MDQ6VXNlcjQzMzI2NTgz", "avatar_url": "https://avatars.githubusercontent.com/u/43326583?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AnnaSou", "html_url": "https://github.com/AnnaSou", "followers_url": "https://api.github.com/users/AnnaSo...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 3
created_at: 1617332380000
updated_at: 1617418490000
closed_at: null
author_association: NONE
active_lock_reason: null
body: Hello, I trying to get masked words predictions for languages except English with Roberta or XLM Roberta. ``` from transformers import pipeline nlp = pipeline("fill-mask", model="roberta-base") template = f"That woman is {nlp.tokenizer.mask_token}." output = nlp(template) nlp4 = pipeline("fill-mask", model...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11030
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11030/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11030/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11030/events
html_url: https://github.com/huggingface/transformers/issues/11030
id: 848823702
node_id: MDU6SXNzdWU4NDg4MjM3MDI=
number: 11030
title: pipeline.from_pretrained
user: { "login": "cronoik", "id": 18630848, "node_id": "MDQ6VXNlcjE4NjMwODQ4", "avatar_url": "https://avatars.githubusercontent.com/u/18630848?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cronoik", "html_url": "https://github.com/cronoik", "followers_url": "https://api.github.com/users/cronoi...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617315416000
updated_at: 1617315451000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body: # 🚀 Feature request Nearly everyone who is using the transformers library is aware of the `from_pretrained()` and `save_pretrained()` concept. The [Pipeline class](https://huggingface.co/transformers/main_classes/pipelines.html#parent-class-pipeline) is currently only providing the `save_pretrained()` method which ...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11029
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11029/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11029/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11029/events
html_url: https://github.com/huggingface/transformers/pull/11029
id: 848798224
node_id: MDExOlB1bGxSZXF1ZXN0NjA3Njc4Nzg3
number: 11029
title: Documentation about loading a fast tokenizer within Transformers
user: { "login": "LysandreJik", "id": 30755778, "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "gravatar_id": "", "url": "https://api.github.com/users/LysandreJik", "html_url": "https://github.com/LysandreJik", "followers_url": "https://api.github.com/...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617312168000
updated_at: 1617312168000
closed_at: null
author_association: MEMBER
active_lock_reason: null
body: This PR does two things: - Allows to load a fast tokenizer from an instantiated `tokenizers` object - Adds a page to document how to use these tokenizers within `transformers` See [here](https://190138-155220641-gh.circle-artifacts.com/0/docs/_build/html/fast_tokenizers.html) for the generated docs
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11028
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11028/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11028/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11028/events
html_url: https://github.com/huggingface/transformers/issues/11028
id: 848769061
node_id: MDU6SXNzdWU4NDg3NjkwNjE=
number: 11028
title: Fine Tune GPT-NEO 2.7B
user: { "login": "antocapp", "id": 26765504, "node_id": "MDQ6VXNlcjI2NzY1NTA0", "avatar_url": "https://avatars.githubusercontent.com/u/26765504?v=4", "gravatar_id": "", "url": "https://api.github.com/users/antocapp", "html_url": "https://github.com/antocapp", "followers_url": "https://api.github.com/users/ant...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 1617309086000
updated_at: 1617312297000
closed_at: null
author_association: NONE
active_lock_reason: null
body: Hello to everyone, is there a script to fine tune this new model? Thanks
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11027
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11027/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11027/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11027/events
html_url: https://github.com/huggingface/transformers/pull/11027
id: 848767936
node_id: MDExOlB1bGxSZXF1ZXN0NjA3NjUzMTAy
number: 11027
title: [WIP] Refactor AutoModel classes and add Flax Auto classes
user: { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugge...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617308974000
updated_at: 1617310405000
closed_at: null
author_association: MEMBER
active_lock_reason: null
body: # What does this PR do? This PR refactors the logic behind all the Auto model classes in one function that automatically builds those classes from a template. In passing, it uses this new function to build the auto classes for FLAX (at least the ones that have at least one model implemented).
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11026
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11026/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11026/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11026/events
html_url: https://github.com/huggingface/transformers/pull/11026
id: 848754983
node_id: MDExOlB1bGxSZXF1ZXN0NjA3NjQyMjM1
number: 11026
title: Add `examples/language_modeling/run_clm_no_trainer.py`
user: { "login": "hemildesai", "id": 8195444, "node_id": "MDQ6VXNlcjgxOTU0NDQ=", "avatar_url": "https://avatars.githubusercontent.com/u/8195444?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hemildesai", "html_url": "https://github.com/hemildesai", "followers_url": "https://api.github.com/users...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617307709000
updated_at: 1617314068000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body: # What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this w...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11024
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11024/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11024/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11024/events
html_url: https://github.com/huggingface/transformers/pull/11024
id: 848717134
node_id: MDExOlB1bGxSZXF1ZXN0NjA3NjEwNjQ1
number: 11024
title: Add a script to check inits are consistent
user: { "login": "sgugger", "id": 35901082, "node_id": "MDQ6VXNlcjM1OTAxMDgy", "avatar_url": "https://avatars.githubusercontent.com/u/35901082?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sgugger", "html_url": "https://github.com/sgugger", "followers_url": "https://api.github.com/users/sgugge...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617304032000
updated_at: 1617312410000
closed_at: null
author_association: MEMBER
active_lock_reason: null
body: # What does this PR do? Most inits in the project define the same objects twice (once in `_import_structure` and once in TYPE_CHECKING) to have a fast import so objects are only grabbed when actually needed. The problem is that those two halves have a tendency to diverge as contributors do not always pay attention t...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11023
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11023/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11023/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11023/events
html_url: https://github.com/huggingface/transformers/issues/11023
id: 848680168
node_id: MDU6SXNzdWU4NDg2ODAxNjg=
number: 11023
title: Strange ValueError with GPT-2
user: { "login": "AI-Guru", "id": 32195399, "node_id": "MDQ6VXNlcjMyMTk1Mzk5", "avatar_url": "https://avatars.githubusercontent.com/u/32195399?v=4", "gravatar_id": "", "url": "https://api.github.com/users/AI-Guru", "html_url": "https://github.com/AI-Guru", "followers_url": "https://api.github.com/users/AI-Gur...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 1617300552000
updated_at: 1617345060000
closed_at: null
author_association: NONE
active_lock_reason: null
body: ## Environment info <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! --> - `transformers` version: 4.4.2 - Platform: macOS-10.15.7-x86_64-i386-64bit - Python version: 3.8.6 - PyTorch version (GPU?): 1.7.1 (F...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11022
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11022/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11022/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11022/events
html_url: https://github.com/huggingface/transformers/issues/11022
id: 848679174
node_id: MDU6SXNzdWU4NDg2NzkxNzQ=
number: 11022
title: cannot import name 'AutoModelForSequenceClassification' from 'transformers'
user: { "login": "nithinreddyy", "id": 56256685, "node_id": "MDQ6VXNlcjU2MjU2Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/56256685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/nithinreddyy", "html_url": "https://github.com/nithinreddyy", "followers_url": "https://api.github.c...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 1617300456000
updated_at: 1617315547000
closed_at: null
author_association: NONE
active_lock_reason: null
body: ``` from transformers import pipeline classifier = pipeline('sentiment-analysis') #This code will download the pipeline classifier('We are very happy to show you the 🤗 Transformers library.') classifier.save_pretrained('/some/directory') ``` I'm trying to save the model and trying to perform the sentiment-ana...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11021
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11021/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11021/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11021/events
html_url: https://github.com/huggingface/transformers/issues/11021
id: 848651434
node_id: MDU6SXNzdWU4NDg2NTE0MzQ=
number: 11021
title: Module Not found: datasets_modules.datasets.output
user: { "login": "ashleylew", "id": 68515763, "node_id": "MDQ6VXNlcjY4NTE1NzYz", "avatar_url": "https://avatars.githubusercontent.com/u/68515763?v=4", "gravatar_id": "", "url": "https://api.github.com/users/ashleylew", "html_url": "https://github.com/ashleylew", "followers_url": "https://api.github.com/users/...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617297828000
updated_at: 1617297861000
closed_at: null
author_association: NONE
active_lock_reason: null
body: ## Environment info - `transformers` version: 4.5.0.dev0 - Platform: Linux-3.10.0-1160.15.2.el7.x86_64-x86_64-with-glibc2.10 - Python version: 3.8.5 - PyTorch version (GPU?): 1.8.1 (False) - Tensorflow version (GPU?): not installed (NA) - Using GPU in script?: not sure - Using distributed or parallel set-up in...
performed_via_github_app: null
pull_request: null

url: https://api.github.com/repos/huggingface/transformers/issues/11020
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11020/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11020/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11020/events
html_url: https://github.com/huggingface/transformers/issues/11020
id: 848566666
node_id: MDU6SXNzdWU4NDg1NjY2NjY=
number: 11020
title: Trainer API crashes GPUs
user: { "login": "dmitriydligach", "id": 5121609, "node_id": "MDQ6VXNlcjUxMjE2MDk=", "avatar_url": "https://avatars.githubusercontent.com/u/5121609?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dmitriydligach", "html_url": "https://github.com/dmitriydligach", "followers_url": "https://api.gith...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 1617290704000
updated_at: 1617295389000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  ## Environment info
  - `transformers` version: 4.5.0.dev0
  - Platform: Ubuntu 20.04.2 LTS
  - Python version: Python 3.8.5
  - PyTorch version (GPU?): 1.7.1
  - Tensorflow version (GPU?): 2.4.1
  - Using GPU in script?: Yes
  - Using distributed or parallel set-up in script?: Yes

  My scripts that use Trainer API crash G...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11019
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11019/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11019/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11019/events
html_url: https://github.com/huggingface/transformers/issues/11019
id: 848543462
node_id: MDU6SXNzdWU4NDg1NDM0NjI=
number: 11019
title: Enable multiple `eval_dataset` in `Trainer` API
user: { "login": "simonschoe", "id": 53626067, "node_id": "MDQ6VXNlcjUzNjI2MDY3", "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "gravatar_id": "", "url": "https://api.github.com/users/simonschoe", "html_url": "https://github.com/simonschoe", "followers_url": "https://api.github.com/use...
labels: [ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 1617289031000
updated_at: 1617298990000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  # 🚀 Feature request
  Allow for two or more (equally long) validation sets to be passed to the `Trainer` API which are evaluated sequentially each `eval_steps`.
  ## Motivation
  You can find my motivation in this [thread](https://discuss.huggingface.co/t/use-trainer-api-with-two-valiation-sets/5212) and the refere...
performed_via_github_app: null
pull_request: null
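The `labels` column is a JSON array of label objects, empty for most records here but populated for rows like the one above. A sketch (with a hypothetical trimmed-down label object) of filtering such rows by label name:

```python
import json

def has_label(labels_json: str, name: str) -> bool:
    """Check whether a labels column (a JSON array of label objects) contains a name."""
    return any(lbl.get("name") == name for lbl in json.loads(labels_json))

# Hypothetical subset of the label object fields shown in the record above.
row_labels = '[{"id": 2648621985, "name": "Feature request", "color": "FBCA04"}]'
wanted = has_label(row_labels, "Feature request")  # → True
```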
url: https://api.github.com/repos/huggingface/transformers/issues/11018
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11018/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11018/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11018/events
html_url: https://github.com/huggingface/transformers/issues/11018
id: 848537240
node_id: MDU6SXNzdWU4NDg1MzcyNDA=
number: 11018
title: T5 documentation for computing pretraining loss seems to have a mistake
user: { "login": "dorost1234", "id": 79165106, "node_id": "MDQ6VXNlcjc5MTY1MTA2", "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "gravatar_id": "", "url": "https://api.github.com/users/dorost1234", "html_url": "https://github.com/dorost1234", "followers_url": "https://api.github.com/use...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617288595000
updated_at: 1617297044000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  Dear @patrickvonplaten
  The documentation of T5 for computing loss of pretraining seems to have a mistake, where it talks on the loss formulation:
  https://huggingface.co/transformers/model_doc/t5.html?highlight=decoder_input_ids
  ```
  input_ids = tokenizer('The <extra_id_0> walks in <extra_id_1> park', return_te...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11016
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11016/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11016/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11016/events
html_url: https://github.com/huggingface/transformers/issues/11016
id: 848490060
node_id: MDU6SXNzdWU4NDg0OTAwNjA=
number: 11016
title: Add new CANINE model
user: { "login": "stefan-it", "id": 20651387, "node_id": "MDQ6VXNlcjIwNjUxMzg3", "avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stefan-it", "html_url": "https://github.com/stefan-it", "followers_url": "https://api.github.com/users/...
labels: [ { "id": 1843244711, "node_id": "MDU6TGFiZWwxODQzMjQ0NzEx", "url": "https://api.github.com/repos/huggingface/transformers/labels/New%20model", "name": "New model", "color": "fbca04", "default": false, "description": "" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617285201000
updated_at: 1617286936000
closed_at: null
author_association: COLLABORATOR
active_lock_reason: null
body:
  # 🌟 New model addition
  ## Model description
  Google recently proposed a new **C**haracter **A**rchitecture with **N**o tokenization **I**n **N**eural **E**ncoders architecture (CANINE). Not only the title is exciting:
  > Pipelined NLP systems have largely been superseded by end-to-end neural modeling, yet nearl...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11014
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11014/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11014/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11014/events
html_url: https://github.com/huggingface/transformers/issues/11014
id: 848375119
node_id: MDU6SXNzdWU4NDgzNzUxMTk=
number: 11014
title: OSError: Can't load config for '/content/wav2vec2-large-xlsr-asr-demo'. Make sure that:
user: { "login": "Kowsher", "id": 16461536, "node_id": "MDQ6VXNlcjE2NDYxNTM2", "avatar_url": "https://avatars.githubusercontent.com/u/16461536?v=4", "gravatar_id": "", "url": "https://api.github.com/users/Kowsher", "html_url": "https://github.com/Kowsher", "followers_url": "https://api.github.com/users/Kowshe...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 1
created_at: 1617275957000
updated_at: 1617295152000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  I'm using pip install transformers==4.4.2
  After completing the training process of ASR I can not read the trained file from my local storage. Although the path is right. But can read from hugging face
  `model = Wav2Vec2ForCTC.from_pretrained("/content/wav2vec2-large-xlsr-asr-demo").to("cuda")`
  The error: OS...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11013
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11013/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11013/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11013/events
html_url: https://github.com/huggingface/transformers/issues/11013
id: 848349453
node_id: MDU6SXNzdWU4NDgzNDk0NTM=
number: 11013
title: use `BaseModelOutput` as common interface for all different `BaseModelOutputWith*`?
user: { "login": "JoanFM", "id": 19825685, "node_id": "MDQ6VXNlcjE5ODI1Njg1", "avatar_url": "https://avatars.githubusercontent.com/u/19825685?v=4", "gravatar_id": "", "url": "https://api.github.com/users/JoanFM", "html_url": "https://github.com/JoanFM", "followers_url": "https://api.github.com/users/JoanFM/fo...
labels: [ { "id": 2648621985, "node_id": "MDU6TGFiZWwyNjQ4NjIxOTg1", "url": "https://api.github.com/repos/huggingface/transformers/labels/Feature%20request", "name": "Feature request", "color": "FBCA04", "default": false, "description": "Request for a new feature" } ]
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617273662000
updated_at: 1617299016000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  Hello team, I have been taking a look at the `different` output models from your models, and I wonder if it would make sense to inherit all the `BaseModelOutputWithPool` and all the other flavours of modeling output, instead of using `ModelOutput`.
  https://github.com/huggingface/transformers/blob/c301c26370dfa48f...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11012
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11012/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11012/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11012/events
html_url: https://github.com/huggingface/transformers/pull/11012
id: 848275273
node_id: MDExOlB1bGxSZXF1ZXN0NjA3MjM3OTQ4
number: 11012
title: Add multi-class, multi-label and regression to transformers
user: { "login": "abhishekkrthakur", "id": 1183441, "node_id": "MDQ6VXNlcjExODM0NDE=", "avatar_url": "https://avatars.githubusercontent.com/u/1183441?v=4", "gravatar_id": "", "url": "https://api.github.com/users/abhishekkrthakur", "html_url": "https://github.com/abhishekkrthakur", "followers_url": "https://ap...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617268019000
updated_at: 1617368441000
closed_at: null
author_association: MEMBER
active_lock_reason: null
body: null
performed_via_github_app: null
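This dataset mixes issues and pull requests (the record above has a `/pull/` `html_url`; most others use `/issues/`). A minimal sketch for telling them apart from `html_url` alone:

```python
def record_kind(html_url: str) -> str:
    """Classify a record as issue or pull request from its html_url path segment."""
    return "pull_request" if "/pull/" in html_url else "issue"

kind = record_kind("https://github.com/huggingface/transformers/pull/11012")
# → 'pull_request'
```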
url: https://api.github.com/repos/huggingface/transformers/issues/11050
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11050/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11050/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11050/events
html_url: https://github.com/huggingface/transformers/pull/11050
id: 849866711
node_id: MDExOlB1bGxSZXF1ZXN0NjA4NTM5Nzgw
number: 11050
title: accelerate scripts for question answering with no trainer
user: { "login": "theainerd", "id": 15798640, "node_id": "MDQ6VXNlcjE1Nzk4NjQw", "avatar_url": "https://avatars.githubusercontent.com/u/15798640?v=4", "gravatar_id": "", "url": "https://api.github.com/users/theainerd", "html_url": "https://github.com/theainerd", "followers_url": "https://api.github.com/users/...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617539967000
updated_at: 1617539967000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  # What does this PR do?
  <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this w...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11049
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11049/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11049/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11049/events
html_url: https://github.com/huggingface/transformers/pull/11049
id: 849737172
node_id: MDExOlB1bGxSZXF1ZXN0NjA4NDQyMTM1
number: 11049
title: [docs] fix xref to `PreTrainedModel.generate`
user: { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/fo...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617482996000
updated_at: 1617483306000
closed_at: null
author_association: COLLABORATOR
active_lock_reason: null
body:
  This PR partially resolves the issue raised in https://github.com/huggingface/transformers/issues/9202
  I spent quite some time to try to figure out how to get sphinx to figure out the inheritance so that it could cross-reference inherited methods, but it can't even handle mixins it seems. i.e. it can't resolve: `tra...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11048
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11048/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11048/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11048/events
html_url: https://github.com/huggingface/transformers/pull/11048
id: 849734674
node_id: MDExOlB1bGxSZXF1ZXN0NjA4NDQwMjYz
number: 11048
title: fix incorrect case for s|Pretrained|PreTrained|
user: { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/fo...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617482010000
updated_at: 1617482010000
closed_at: null
author_association: COLLABORATOR
active_lock_reason: null
body:
  This PR fixes incorrect `Pretrained` case for 2 cases:
  ```
  git-replace PretrainedTokenizer PreTrainedTokenizer
  git-replace transformers.PretrainedModel transformers.PreTrainedModel
  ```
  there might be other cases to fix, but these stood out. @sgugger
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11047
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11047/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11047/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11047/events
html_url: https://github.com/huggingface/transformers/issues/11047
id: 849604791
node_id: MDU6SXNzdWU4NDk2MDQ3OTE=
number: 11047
title: Use Bert model without pretrained weights
user: { "login": "avinashsai", "id": 22453634, "node_id": "MDQ6VXNlcjIyNDUzNjM0", "avatar_url": "https://avatars.githubusercontent.com/u/22453634?v=4", "gravatar_id": "", "url": "https://api.github.com/users/avinashsai", "html_url": "https://github.com/avinashsai", "followers_url": "https://api.github.com/use...
labels: []
state: closed
locked: false
assignee: null
assignees: []
milestone: null
comments: 2
created_at: 1617436513000
updated_at: 1617451672000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  Hi, I wanted to train a Bert classifier from scratch without any pretrained weights. It has to be randomly initialized and trained. Example:
  ```
  bert_base_model = BertForSequenceClassification()
  trainer = Trainer(model=bert_base_model, args=training_args, train_dataset=tra...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11046
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11046/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11046/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11046/events
html_url: https://github.com/huggingface/transformers/issues/11046
id: 849568459
node_id: MDU6SXNzdWU4NDk1Njg0NTk=
number: 11046
title: Potential incorrect application of layer norm in BlenderbotSmallDecoder
user: { "login": "sougata-ub", "id": 59206549, "node_id": "MDQ6VXNlcjU5MjA2NTQ5", "avatar_url": "https://avatars.githubusercontent.com/u/59206549?v=4", "gravatar_id": "", "url": "https://api.github.com/users/sougata-ub", "html_url": "https://github.com/sougata-ub", "followers_url": "https://api.github.com/use...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617421052000
updated_at: 1617421052000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  In BlenderbotSmallDecoder, layer norm is applied only on the token embeddings, and not on the hidden_states, whereas in the BlenderbotSmallEncoder, layer norm is applied after adding the input_embeds and positional embeds.
  BlenderbotSmallEncoder:
  `hidden_states = inputs_embeds + embed_pos`
  `hidden_states = self.la...
performed_via_github_app: null
pull_request: null
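The report above contrasts normalizing the token embeddings alone versus normalizing the sum of token and positional embeddings. A dependency-free sketch (pure Python, toy vectors are hypothetical) showing that the two orderings generally give different hidden states, which is the asymmetry the issue points out:

```python
import math

def layer_norm(x, eps=1e-5):
    """Normalize a vector to zero mean / unit variance (no learned scale or shift)."""
    mean = sum(x) / len(x)
    var = sum((v - mean) ** 2 for v in x) / len(x)
    return [(v - mean) / math.sqrt(var + eps) for v in x]

tok = [1.0, 2.0, 3.0, 4.0]   # toy token embedding
pos = [0.5, -0.5, 1.5, 0.0]  # toy positional embedding

encoder_style = layer_norm([t + p for t, p in zip(tok, pos)])  # norm(tok + pos)
decoder_style = [n + p for n, p in zip(layer_norm(tok), pos)]  # norm(tok) + pos
# encoder_style has zero mean by construction; decoder_style generally does not.
```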
url: https://api.github.com/repos/huggingface/transformers/issues/11045
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11045/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11045/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11045/events
html_url: https://github.com/huggingface/transformers/issues/11045
id: 849544374
node_id: MDU6SXNzdWU4NDk1NDQzNzQ=
number: 11045
title: Multi-GPU seq2seq example evaluation significantly slower than legacy example evaluation
user: { "login": "PeterAJansen", "id": 3813268, "node_id": "MDQ6VXNlcjM4MTMyNjg=", "avatar_url": "https://avatars.githubusercontent.com/u/3813268?v=4", "gravatar_id": "", "url": "https://api.github.com/users/PeterAJansen", "html_url": "https://github.com/PeterAJansen", "followers_url": "https://api.github.com...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617411144000
updated_at: 1617411144000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  ### Who can help
  @patil-suraj @sgugger
  Models: T5
  ## Information
  I've been doing multi-GPU evaluation for some weeks using a Transformers pull from Feb 12th, just using the example scripts for training/evaluating custom datasets (specifically `run_distributed_eval.py`, though that seq2seq example is now ...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11044
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11044/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11044/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11044/events
html_url: https://github.com/huggingface/transformers/issues/11044
id: 849529761
node_id: MDU6SXNzdWU4NDk1Mjk3NjE=
number: 11044
title: [DeepSpeed] ZeRO stage 3 integration: getting started and issues
user: { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/fo...
labels: [ { "id": 2659267025, "node_id": "MDU6TGFiZWwyNjU5MjY3MDI1", "url": "https://api.github.com/repos/huggingface/transformers/labels/DeepSpeed", "name": "DeepSpeed", "color": "4D34F7", "default": false, "description": "" } ]
state: open
locked: false
assignee: { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github.com/users/stas00/fo...
assignees: [ { "login": "stas00", "id": 10676103, "node_id": "MDQ6VXNlcjEwNjc2MTAz", "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "gravatar_id": "", "url": "https://api.github.com/users/stas00", "html_url": "https://github.com/stas00", "followers_url": "https://api.github...
milestone: null
comments: 0
created_at: 1617406842000
updated_at: 1617497735000
closed_at: null
author_association: COLLABORATOR
active_lock_reason: null
body:
  **[This is not yet alive, preparing for the release, so please ignore for now]**
  While we are waiting for deespeed to make a new release and then merge the PR, you can try `pip install -e .` in these 2 branches:
  https://github.com/stas00/DeepSpeed/tree/zero3-everything
  https://github.com/stas00/transformers/tree...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11043
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11043/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11043/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11043/events
html_url: https://github.com/huggingface/transformers/issues/11043
id: 849499734
node_id: MDU6SXNzdWU4NDk0OTk3MzQ=
number: 11043
title: Can't load model to estimater
user: { "login": "gwc4github", "id": 3164663, "node_id": "MDQ6VXNlcjMxNjQ2NjM=", "avatar_url": "https://avatars.githubusercontent.com/u/3164663?v=4", "gravatar_id": "", "url": "https://api.github.com/users/gwc4github", "html_url": "https://github.com/gwc4github", "followers_url": "https://api.github.com/users...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617400304000
updated_at: 1617400304000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  I was trying to follow the Sagemaker instructions [here](https://docs.aws.amazon.com/sagemaker/latest/dg/deploy-model.html) to load the model I just trained and test an estimation. I get the error message:
  NotImplementedError: Creating model with HuggingFace training job is not supported.
  Can someone share some s...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11042
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11042/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11042/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11042/events
html_url: https://github.com/huggingface/transformers/issues/11042
id: 849274362
node_id: MDU6SXNzdWU4NDkyNzQzNjI=
number: 11042
title: [LXMERT] Unclear what img_tensorize does with color spaces
user: { "login": "hivestrung", "id": 27841209, "node_id": "MDQ6VXNlcjI3ODQxMjA5", "avatar_url": "https://avatars.githubusercontent.com/u/27841209?v=4", "gravatar_id": "", "url": "https://api.github.com/users/hivestrung", "html_url": "https://github.com/hivestrung", "followers_url": "https://api.github.com/use...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617376377000
updated_at: 1617376507000
closed_at: null
author_association: NONE
active_lock_reason: null
body:
  ## Environment info
  - `transformers` version: Not using transformers directly, I'm loading a model "unc-nlp/frcnn-vg-finetuned"
  - Platform: MacOS
  - Python version: 3.8
  - PyTorch version (GPU?): 1.6.0, no GPU
  - Tensorflow version (GPU?): don't have
  - Using GPU in script?: no
  - Using distributed or parallel set-...
performed_via_github_app: null
pull_request: null
url: https://api.github.com/repos/huggingface/transformers/issues/11041
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11041/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11041/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11041/events
html_url: https://github.com/huggingface/transformers/pull/11041
id: 849269684
node_id: MDExOlB1bGxSZXF1ZXN0NjA4MDcxNjc1
number: 11041
title: wav2vec2 converter: create the proper vocab.json while converting fairseq wav2vec2 finetuned model
user: { "login": "cceyda", "id": 15624271, "node_id": "MDQ6VXNlcjE1NjI0Mjcx", "avatar_url": "https://avatars.githubusercontent.com/u/15624271?v=4", "gravatar_id": "", "url": "https://api.github.com/users/cceyda", "html_url": "https://github.com/cceyda", "followers_url": "https://api.github.com/users/cceyda/fo...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617375854000
updated_at: 1617377521000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body:
  # What does this PR do?
  While converting a finetuned wav2vec2 model we also need to convert the related dictionary `dict.ltr.txt` to hugging face `vocab.json` format.
  If a `dict_path` is specified:
  - Creates&saves the necessary vocab.json file
  - Modifies config file special token ids and vocab size accordin...
performed_via_github_app: null
pull_request: null
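The PR above describes turning a fairseq letter dictionary (`dict.ltr.txt`, one "TOKEN frequency" pair per line) into a `vocab.json` token-to-id mapping. A hedged sketch of that conversion, not the PR's actual code: the special-token ordering and the sample dictionary lines are assumptions, following fairseq's usual `<s>`, `<pad>`, `</s>`, `<unk>` convention.

```python
import json

def fairseq_dict_to_vocab(lines, specials=("<s>", "<pad>", "</s>", "<unk>")):
    """Build a token->id mapping: assumed special tokens first, then dict entries in file order."""
    vocab = {tok: i for i, tok in enumerate(specials)}
    for line in lines:
        token = line.split()[0]  # each line is "TOKEN frequency"
        vocab[token] = len(vocab)
    return vocab

# Hypothetical dict.ltr.txt excerpt.
vocab = fairseq_dict_to_vocab(["| 94802", "E 51860", "T 38431"])
payload = json.dumps(vocab)  # what would be written out as vocab.json
```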
url: https://api.github.com/repos/huggingface/transformers/issues/11040
repository_url: https://api.github.com/repos/huggingface/transformers
labels_url: https://api.github.com/repos/huggingface/transformers/issues/11040/labels{/name}
comments_url: https://api.github.com/repos/huggingface/transformers/issues/11040/comments
events_url: https://api.github.com/repos/huggingface/transformers/issues/11040/events
html_url: https://github.com/huggingface/transformers/issues/11040
id: 849265615
node_id: MDU6SXNzdWU4NDkyNjU2MTU=
number: 11040
title: max_length in beam_search() and group_beam_search() does not consider beam_scorer.max_length
user: { "login": "GeetDsa", "id": 13940397, "node_id": "MDQ6VXNlcjEzOTQwMzk3", "avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4", "gravatar_id": "", "url": "https://api.github.com/users/GeetDsa", "html_url": "https://github.com/GeetDsa", "followers_url": "https://api.github.com/users/GeetDs...
labels: []
state: open
locked: false
assignee: null
assignees: []
milestone: null
comments: 0
created_at: 1617375392000
updated_at: 1617375452000
closed_at: null
author_association: CONTRIBUTOR
active_lock_reason: null
body:
  ## Environment info
  <!-- You can run the command `transformers-cli env` and copy-and-paste its output below. Don't forget to fill out the missing fields in that output! -->
  - `transformers` version:
  - Platform: 4.3.2
  - Python version: 3.8.5
  - PyTorch version (GPU?): 1.8.0
  - Using GPU in script?: No
  - Us...
performed_via_github_app: null
pull_request: null
End of preview.