| html_url (string, 48-51 chars) | title (string, 5-268 chars) | comments (string, 63-51.8k chars) | body (string, 0-36.2k chars, nullable) | comment_length (int64, 16-1.52k) | text (string, 164-54.1k chars) | embeddings (sequence) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/4291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | Hi @leondz, thanks for reporting.
Indeed, the dataset viewer relies on the dataset being streamable (passing `streaming=True` to `load_dataset`). While most datasets are streamable out of the box (thanks to our implementation of streaming), there are still some exceptions.
In particular, in your case, th... | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | 103 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. W... | [
-0.5941502452, 0.0154772494, 0.1116633341, 0.2606789768, -0.152930513, -0.1338221431, 0.2904717922, 0.2681366503, 0.0844315961, 0.2371055633, 0.0127624674, 0.1334278733, -0.3677476943, 0.0229738876, 0.0893161371, -0.2069136202, -0.0901445225, 0.4387164414, -0.1709370613, -0.035... |
https://github.com/huggingface/datasets/issues/4291 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message | Ah, right! The preview is working now, but this explanation is good to know, thank you. I'll prefer formats with random file access supported in datasets.utils.extract in future, and try out this fix for the tarfiles :) | ### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. What did I miss?
### Owner
Yes | 37 | Dataset Viewer issue for strombergnlp/ipm_nel : preview is empty, no error message
### Link
https://huggingface.co/datasets/strombergnlp/ipm_nel/viewer/ipm_nel/train
### Description
The viewer is blank. I tried my best to emulate a dataset with a working viewer, but this one just doesn't seem to want to come up. W... | [
-0.6053087711, -0.0000201275, 0.0799933597, 0.2253192961, -0.1501957178, -0.1735897362, 0.2938115001, 0.2592690885, 0.0822659805, 0.301726222, 0.0461583324, 0.022501966, -0.3796801269, 0.0506925099, 0.03596, -0.1937340349, -0.0915672556, 0.3536639512, -0.1370633394, -0.04723620... |
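The comment above distinguishes archive formats by whether they support random file access. A minimal stdlib sketch of that difference (the file names and contents are made up for illustration, not taken from the dataset):

```python
import io
import tarfile
import zipfile

# Three small files to archive (made-up names/contents).
files = {f"doc{i}.txt": f"contents {i}".encode() for i in range(3)}

# ZIP keeps a central directory, so any member can be read directly.
zbuf = io.BytesIO()
with zipfile.ZipFile(zbuf, "w") as zf:
    for name, data in files.items():
        zf.writestr(name, data)
zbuf.seek(0)
with zipfile.ZipFile(zbuf) as zf:
    zip_hit = zf.read("doc2.txt")  # random access: jump straight to one member

# TAR is a linear stream of header+data blocks, so a member is found by scanning.
tbuf = io.BytesIO()
with tarfile.open(fileobj=tbuf, mode="w") as tf:
    for name, data in files.items():
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tf.addfile(info, io.BytesIO(data))
tbuf.seek(0)
with tarfile.open(fileobj=tbuf, mode="r") as tf:
    tar_hit = next(
        tf.extractfile(m).read() for m in tf if m.name == "doc2.txt"
    )

print(zip_hit, tar_hit)  # b'contents 2' b'contents 2'
```

This sequential-scan behavior of tar is what makes zip-style formats friendlier for a streaming preview.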
https://github.com/huggingface/datasets/issues/4287 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None | So I managed to solve this by adding a missing `import faiss` in the `@staticmethod` defined in https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/datasets/search.py#L305, triggered from https://github.com/huggingface/datasets/blob/f51b6994db27ea69261ef919fb7775928f9ec10b/src/data... | ## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception.
All that assuming that `datasets` is properly... | 102 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(... | [
-0.2677792013, -0.1930617243, -0.0503546819, 0.2116217315, 0.2999023795, 0.0705516487, 0.6779098511, 0.3354715109, 0.1389086396, 0.4872550368, 0.1502405405, 0.3312829435, 0.1982175112, -0.3838970661, -0.1113043576, 0.026361268, 0.2100861073, 0.2211518586, 0.0979964212, 0.011295... |
https://github.com/huggingface/datasets/issues/4287 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None | Adding here the complete error traceback!
```
Traceback (most recent call last):
File "/home/alvarobartt/lol.py", line 12, in <module>
ds_with_embeddings.add_faiss_index(column='embeddings', device=0) # default `device=None`
File "/home/alvarobartt/.local/lib/python3.9/site-packages/datasets/arrow_datase... | ## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(..., device=0)` fails with that exception.
All that assuming that `datasets` is properly... | 66 | "NameError: name 'faiss' is not defined" on `.add_faiss_index` when `device` is not None
## Describe the bug
When using `datasets` to calculate the FAISS indices of a dataset, the exception `NameError: name 'faiss' is not defined` is triggered when trying to calculate those on a device (GPU), so `.add_faiss_index(... | [
-0.2677792013, -0.1930617243, -0.0503546819, 0.2116217315, 0.2999023795, 0.0705516487, 0.6779098511, 0.3354715109, 0.1389086396, 0.4872550368, 0.1502405405, 0.3312829435, 0.1982175112, -0.3838970661, -0.1113043576, 0.026361268, 0.2100861073, 0.2211518586, 0.0979964212, 0.011295... |
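The `NameError` in the traceback above is the classic pattern of a module imported in one method's scope but referenced in another. A minimal stand-alone reproduction of that pattern (using `json` as a stand-in for `faiss`; this is a hypothetical class, not the real `datasets.search` code):

```python
class IndexBuilder:
    """Hypothetical sketch of the bug pattern described above:
    a module imported inside one method is not visible in another."""

    def add_index(self):
        import json as faiss  # stand-in for `import faiss`; bound only in this scope
        return faiss.__name__

    @staticmethod
    def index_on_gpu():
        # BUG: no `import faiss` here, so the name is unresolved at call time.
        return faiss.__name__  # NameError: name 'faiss' is not defined

    @staticmethod
    def index_on_gpu_fixed():
        import json as faiss  # the fix: import again inside the staticmethod
        return faiss.__name__


builder = IndexBuilder()
print(builder.add_index())  # json
try:
    IndexBuilder.index_on_gpu()
except NameError as err:
    print(err)  # name 'faiss' is not defined
print(IndexBuilder.index_on_gpu_fixed())  # json
```

Because Python resolves `faiss` at call time, the error only surfaces when the GPU branch actually runs, which is why CPU-only usage worked.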
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | Thanks for reporting, @vblagoje.
Indeed, I noticed some of these issues while reviewing this PR:
- #4259
This is in my TODO list. | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 23 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1009263843, 0.2100123912, -0.0518254675, 0.2251940221, -0.1097796932, -0.0576233305, 0.2680732906, 0.3371189237, -0.0792645589, 0.2063141167, 0.1244209558, 0.5698289275, 0.4415481985, 0.3652537167, -0.0546431914, -0.1742747575, 0.3567141891, 0.0840137601, 0.1305603534, -0.09... |
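The flattening described in the issue (`question.stem` stored as `question_stem`) can be undone with a small helper; the field names follow the issue text, but the sample row below is a made-up illustration:

```python
def unflatten_question(row: dict) -> dict:
    """Rebuild the nested layout the issue asks for
    (question_stem -> question.stem); hypothetical sketch."""
    out = dict(row)
    out["question"] = {"stem": out.pop("question_stem")}
    return out


# Made-up flattened row for illustration.
flat = {
    "id": "sample-1",
    "question_stem": "The sun is responsible for",
    "fact1": "the sun is a source of light",
}
nested = unflatten_question(flat)
print(nested["question"]["stem"])  # The sun is responsible for
```

The helper copies the row first, so the flattened input is left untouched.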
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | On the other hand, I am not sure if we should always preserve the original nested structure. I think we should also consider other factors as convenience or consistency.
For example, other datasets also flatten "question.stem" into "question":
- ai2_arc:
```python
question = data["question"]["stem"]
choice... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 132 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1097743586, 0.1326161772, -0.0441396423, 0.251501888, -0.2379210591, -0.0238394011, 0.2634883523, 0.3610416353, 0.006400961, 0.1764746308, 0.096433647, 0.4849393666, 0.4549894035, 0.4911120832, -0.148104012, -0.160447374, 0.3868280947, 0.0276735201, 0.2539488673, -0.03565153... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | @albertvillanova I agree that we should be consistent. In the last month, I have come across tons of code that deals with OpenBookQA and CommonSenseQA and all of that code relies on the original data format structure. We can't expect users to adopt HF Datasets if we arbitrarily change the structure of the format just b... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 107 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.0696031824, 0.2296478003, -0.0100900801, 0.0402288064, -0.1834874302, -0.1042137891, 0.319383949, 0.3728416562, -0.0084602879, 0.1532329619, 0.0812204257, 0.3126558065, 0.5082780123, 0.2665573657, -0.1158585623, -0.245972991, 0.3876905143, 0.1590896994, 0.0666596964, -0.1644... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I'm opening a PR that adds the missing fields.
Let's agree on the feature structure: @lhoestq @mariosasko @polinaeterna | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 18 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1227659434, 0.1294834018, -0.0531327948, 0.2436132431, -0.1115282178, -0.0573373921, 0.1977688372, 0.2891437709, -0.115660876, 0.2128254175, 0.1821909398, 0.4630895257, 0.37812379, 0.3454271257, -0.0517650545, -0.3028847873, 0.3833729029, 0.1199520379, 0.2903484404, -0.00973... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | IMO we should always try to preserve the original structure unless there is a good reason not to (and I don't see one in this case). | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 26 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1161505356, 0.1762668341, -0.0998141393, 0.2082637399, -0.117515862, -0.067944631, 0.2634468973, 0.3842664659, -0.0682339072, 0.2268134803, 0.0829303637, 0.5433103442, 0.3696924746, 0.374489814, -0.0723104253, -0.1654941887, 0.2737470865, 0.0790645778, 0.0742814839, -0.07375... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I agree with @mariosasko . The transition to the original format could be done in one PR for the next minor release, clearly documenting all dataset changes just as @albertvillanova outlined them above and perhaps even providing a per dataset util method to convert the new valid format to the old for backward compatibi... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 81 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.1998205185, 0.2370988578, -0.0784825236, 0.0383832976, -0.0463750511, -0.1121220514, 0.261579901, 0.4792057276, -0.0928937942, 0.1985219419, 0.0985915437, 0.5298961401, 0.2929408848, 0.4118284881, -0.169930324, -0.23902376, 0.3220477998, 0.0963060707, 0.0373929888, -0.029769... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | I don't have a strong opinion on this, besides the fact that whatever decision we agree on, should be applied to all datasets.
There is always the tension between:
- preserving each dataset original structure (which has the advantage of not forcing users to learn other structure for the same dataset),
- and on th... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 161 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.244081974, 0.2624767125, -0.0215143748, 0.2153030038, -0.2133324444, -0.063490659, 0.3279465735, 0.3787684143, 0.0790012851, 0.2448695004, 0.0920588821, 0.5058540702, 0.2577792108, 0.5256162882, -0.0657963902, -0.0864537507, 0.3222070932, -0.0016847457, 0.1259201318, -0.0332... |
https://github.com/huggingface/datasets/issues/4276 | OpenBookQA has missing and inconsistent field names | @albertvillanova, thanks for the detailed answer and the new perspectives. I understand the friction for the best design approach much better now. Ultimately, it is essential to include all the missing fields and the correct data first. Whatever approach is determined to be optimal is important but not as crucial once... | ## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'fact1': row['fact1'],
- 'humanScore': row['humanSc... | 69 | OpenBookQA has missing and inconsistent field names
## Describe the bug
OpenBookQA implementation is inconsistent with the original dataset.
We need to:
1. The dataset field [question][stem] is flattened into question_stem. Unflatten it to match the original format.
2. Add missing additional fields:
- 'f... | [
-0.0945750922, 0.164955914, -0.1062555909, 0.1807774007, -0.1605589986, -0.1824282259, 0.2929793894, 0.3721559942, -0.0281362962, 0.237318486, 0.0815383717, 0.4941643178, 0.3443228602, 0.3875853419, -0.1219676882, -0.1653728038, 0.1791916639, 0.1058904976, 0.0738654286, -0.0472... |
https://github.com/huggingface/datasets/issues/4271 | A typo in docs of datasets.disable_progress_bar | Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :) | ## Describe the bug
In the docs of V2.1.0 `datasets.disable_progress_bar`, we should replace "enable" with "disable".
## Describe the bug
In the docs of V2.1.0 `datasets.disable_progress_bar`, we should replace "enable" with "disable".
Hi! Thanks for catching and reporting the typo, a PR has been opened to fix it :) | [
-0.2229197919, 0.1155907437, -0.1957840174, -0.2332064658, 0.1984051019, -0.018780956, 0.2855718732, 0.2007148415, -0.1886951476, 0.3785544336, 0.2363237292, 0.3476401567, 0.1750877202, 0.3231857717, -0.1807082295, 0.05160008, 0.0662440285, 0.2353480756, -0.1192478538, 0.097619... |
https://github.com/huggingface/datasets/issues/4268 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered | It would help a lot to be able to preview the dataset - I'd like to see if the pronunciations are in the dataset, eg. for ["word"](https://en.wiktionary.org/wiki/word),
Pronunciation
([Received Pronunciation](https://en.wikipedia.org/wiki/Received_Pronunciation)) [IPA](https://en.wiktionary.org/wiki/Wiktionary:Inte... | ## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
## Expected results
A clear and concise description of the expected results... | 38 | error downloading bigscience-catalogue-lm-data/lm_en_wiktionary_filtered
## Describe the bug
Error generated when attempting to download dataset
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("bigscience-catalogue-lm-data/lm_en_wiktionary_filtered")
```
#... | [
-0.2923785746, -0.0414035209, -0.1572302878, 0.2814292014, 0.0499909259, -0.0285517592, 0.1625797749, 0.5453563929, 0.2865177393, 0.0777813271, -0.168789044, 0.2441957742, -0.1849417686, -0.0490182601, -0.0428375825, 0.0211599749, -0.0528010242, -0.1682505608, -0.2153728455, -0... |