(Dataset viewer preview omitted: a single `input_ids` column of tokenized sequences, each 8,192 tokens long and beginning with the Llama 3 BOS token 128000.)
OpenWebTextCorpus tokenized for Llama 3
This dataset is a pre-tokenized version of the Skylion007/openwebtext dataset, produced with the Llama 3 tokenizer. It therefore follows the same licensing as the original openwebtext dataset.
The pre-tokenization is a performance optimization for training or evaluating Llama 3 models on openwebtext. The dataset was created with SAELens, using the following settings:
- context_size: 8192
- shuffled: true
- begin_batch_token: "bos"
- begin_sequence_token: null
- sequence_separator_token: "eos"
- sae_lens_version: "3.3.0"
The eos token was used as the separator between sequences because this gave the lowest loss in our experiments.
Ideally we would match the tokenization settings used in the original Llama 3 training run, so if
you have information that Llama 3 was trained with a different tokenization setup, please reach out!
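The packing scheme these settings describe can be sketched roughly as follows. Note this is a minimal illustration, not SAELens's actual implementation: the function name `pack_sequences` and the toy token IDs are made up here, and SAELens may differ in details such as how it handles the trailing partial batch.

```python
def pack_sequences(token_seqs, context_size, bos_id, eos_id):
    """Pack tokenized documents into fixed-length contexts.

    Mirrors the settings above: each packed batch begins with BOS
    (begin_batch_token: "bos"), documents are joined with EOS
    (sequence_separator_token: "eos"), and no extra per-document
    begin token is added (begin_sequence_token: null).
    """
    # Flatten all documents into one token stream, with EOS between them.
    stream = []
    for i, seq in enumerate(token_seqs):
        if i > 0:
            stream.append(eos_id)
        stream.extend(seq)

    # Slice the stream into batches, reserving one slot per batch
    # for the leading BOS token; a trailing partial batch is dropped.
    batches = []
    body = context_size - 1
    for start in range(0, len(stream) - body + 1, body):
        batches.append([bos_id] + stream[start:start + body])
    return batches
```

With `context_size=8192` and the Llama 3 special tokens, every resulting row starts with token 128000 (BOS), matching the sequences shown in the preview above.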