💻 EAI-Taxonomy Code w/ DCLM (100B sample)
A 100 billion token sample of high-quality code curated from web data using taxonomy-based filtering.
🎯 Dataset Overview
This dataset is part of the Essential-Web project, which introduces a new paradigm for dataset curation using expressive metadata and simple semantic filters. Unlike traditional code datasets that require complex domain-specific pipelines, our approach leverages a 12-category taxonomy to efficiently identify and extract high-quality code data.
💡 EAI-Taxonomy Code w/ DCLM (100B tokens): Documents targeting code that exhibit intermediate to advanced reasoning, combined with the DCLM classifier to filter for instruction-dense documents. Also includes mathematics content (51 - Mathematics) to match the scope of existing code datasets.
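The curation rule above can be sketched as a simple predicate over the taxonomy fields documented later in this card. This is an illustrative assumption of how such a filter might look, not the exact production pipeline; the record shape follows the schema below, and the DCLM classifier step is omitted:

```python
# Illustrative sketch of the semantic filter described above; the exact
# production filter may differ. Field paths follow the schema in this card.

def passes_filter(record: dict) -> bool:
    """Keep docs in computing (FDC 00x) or mathematics (FDC 51x) whose
    reasoning depth is intermediate (3) or better."""
    tax = record["eai_taxonomy"]
    fdc = str(tax["free_decimal_correspondence"]["primary"]["code"])
    depth = int(tax["reasoning_depth"]["primary"]["code"])
    return (fdc.startswith("00") or fdc.startswith("51")) and depth >= 3

sample = {
    "eai_taxonomy": {
        "free_decimal_correspondence": {"primary": {"code": "005.1"}},
        "reasoning_depth": {"primary": {"code": 4}},
    }
}
print(passes_filter(sample))  # True: computing subject, advanced reasoning
```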
🏆 Performance
Our taxonomy-based approach achieves competitive results with significantly less curation effort:
| Dataset | HumanEval+ | MBPP+ | MMLU-CS | Curation Complexity |
|---|---|---|---|---|
| DCLM-baseline | 28.0% | 45.5% | 32.0% | General web filtering |
| OpenCoder FW | 26.2% | 45.8% | 27.7% | Complex domain pipeline |
| EAI-Taxonomy Code | 27.4% | 46.6% | 29.0% | Simple semantic filter |
| EAI-Taxonomy Code w/ DCLM | 28.7% | 45.0% | 47.0% | + DCLM classifier |
Results show competitive code generation performance with a +46.8% improvement in computer science knowledge (MMLU-CS) compared to baseline.
🔍 Key Findings
- Code Generation: All datasets perform within statistical error on single-function generation benchmarks (HumanEval+, MBPP+)
- Code Knowledge: Clear impact on general computer science knowledge when using taxonomy-curated data
- Efficiency: Achieves strong performance without complex domain-specific curation pipelines
Dataset Schema Documentation
Overview
This dataset contains web-crawled text data with comprehensive metadata, quality signals, and taxonomic classifications. Each record represents a document extracted from web archives with detailed provenance tracking and quality assessment metrics.
Core Fields
| Field | Type | Description | Path |
|---|---|---|---|
| id | Int64 | Unique identifier based on document hash | id |
| text | String | The main textual content of the document | text |
EAI Taxonomy Classification
Comprehensive hierarchical classification system with primary and secondary labels; this is the most important feature of this dataset. The taxonomy is designed to provide detailed subject categorization, document type identification, content quality assessment, and extraction quality indicators.
Free Decimal Correspondence (FDC)
A Dewey Decimal-inspired classification system with 3-level hierarchical labels. The FDC provides nested categories where each successive level refines its parent category. It's designed to be compatible with the Dewey Decimal System for library cataloging.
Level Structure:
- Level 1: Top-level categories (0-9) covering broad subject areas like General works, Philosophy, Religion, Social Sciences, etc.
- Level 2: Sub-divisions (00-99) that refine Level 1 categories
- Level 3: Specific categories (000-999) that further refine Level 2 categories
| Component | Description | Path |
|---|---|---|
| Primary Code | Main classification code | eai_taxonomy.free_decimal_correspondence.primary.code |
| Primary Level 1 | Top-level category (0=General works, 1=Philosophy, 2=Religion, 3=Social Sciences, 4=Language, 5=Science, 6=Technology, 7=Arts, 8=Literature, 9=History/Geography) | eai_taxonomy.free_decimal_correspondence.primary.labels.level_1 |
| Primary Level 2 | Mid-level category | eai_taxonomy.free_decimal_correspondence.primary.labels.level_2 |
| Primary Level 3 | Specific category | eai_taxonomy.free_decimal_correspondence.primary.labels.level_3 |
| Secondary Code | Alternative classification code | eai_taxonomy.free_decimal_correspondence.secondary.code |
| Secondary Level 1 | Alternative top-level category | eai_taxonomy.free_decimal_correspondence.secondary.labels.level_1 |
| Secondary Level 2 | Alternative mid-level category | eai_taxonomy.free_decimal_correspondence.secondary.labels.level_2 |
| Secondary Level 3 | Alternative specific category | eai_taxonomy.free_decimal_correspondence.secondary.labels.level_3 |
We recommend this viewer for easily navigating the FDC categories when curating filters: https://www.librarything.com/mds
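Because each successive digit refines its parent category, the first digit of an FDC code already determines the Level 1 bucket. A small helper makes this concrete (illustrative only; the category names follow the Level 1 list in the path table above):

```python
# Map an FDC code to its Level 1 category via its first digit
# (category names as listed in the table above).
FDC_LEVEL_1 = {
    "0": "General works",
    "1": "Philosophy",
    "2": "Religion",
    "3": "Social Sciences",
    "4": "Language",
    "5": "Science",
    "6": "Technology",
    "7": "Arts",
    "8": "Literature",
    "9": "History/Geography",
}

def fdc_level_1(code: str) -> str:
    """Return the top-level category for an FDC code such as '005.1'."""
    return FDC_LEVEL_1[code[0]]

print(fdc_level_1("005.1"))  # General works
print(fdc_level_1("519.5"))  # Science
```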
Bloom's Taxonomy Integration
Based on Anderson and Krathwohl's 2001 revision of Bloom's Taxonomy of Educational Objectives, providing two complementary categorization dimensions for educational content analysis.
Knowledge Domain
Categorizes the type of knowledge demonstrated in the document:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main knowledge domain code | eai_taxonomy.bloom_knowledge_domain.primary.code |
| Primary Label | Main knowledge domain label | eai_taxonomy.bloom_knowledge_domain.primary.label |
| Secondary Code | Alternative knowledge domain code | eai_taxonomy.bloom_knowledge_domain.secondary.code |
| Secondary Label | Alternative knowledge domain label | eai_taxonomy.bloom_knowledge_domain.secondary.label |
Possible Values:
| Code | Label | Description |
|---|---|---|
| -1 | Abstain | Unable to determine |
| 1 | Factual | Basic elements to learn or solve problems |
| 2 | Conceptual | Interrelationships between basic elements within larger context |
| 3 | Procedural | Methods and techniques in the discipline |
| 4 | Metacognitive | Awareness of how learning works in relation to oneself |
Cognitive Processing Level
Assesses the learning and thinking skill levels demonstrated by the document author:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main cognitive process code | eai_taxonomy.bloom_cognitive_process.primary.code |
| Primary Label | Main cognitive process label | eai_taxonomy.bloom_cognitive_process.primary.label |
| Secondary Code | Alternative cognitive process code | eai_taxonomy.bloom_cognitive_process.secondary.code |
| Secondary Label | Alternative cognitive process label | eai_taxonomy.bloom_cognitive_process.secondary.label |
Possible Values:
| Code | Label | Description |
|---|---|---|
| -1 | Abstain | Unable to determine |
| 1 | Remember | Retrieve relevant knowledge from memory |
| 2 | Understand | Determine meaning of instructional messages |
| 3 | Apply | Use a procedure in a given situation |
| 4 | Analyze | Break materials into components and determine relationships |
| 5 | Evaluate | Make judgments based on criteria and standards |
| 6 | Create | Create new or original work |
Document Characteristics
Document Type v1
In-house classification of common web document types and formats:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main document type code | eai_taxonomy.document_type_v1.primary.code |
| Primary Label | Main document type label | eai_taxonomy.document_type_v1.primary.label |
| Secondary Code | Alternative document type code | eai_taxonomy.document_type_v1.secondary.code |
| Secondary Label | Alternative document type label | eai_taxonomy.document_type_v1.secondary.label |
Possible Values:
| Code | Label | Examples |
|---|---|---|
| -1 | Abstain | Unable to classify |
| 1 | News/Editorial | CNN articles, opinion columns |
| 2 | Academic/Research | ArXiv papers, research articles |
| 3 | Reference/Encyclopedic/Educational | FAQs, Wikipedia entries |
| 4 | Code/Software | GitHub repos, code examples |
| 5 | Social/Forum | Conversation threads, Q&A boards |
| 6 | Promotional/Advertisement | Product pages, calls to action |
| 7 | Search/Directory/Bibliography | Link pages, search results |
| 8 | Adult/Pornographic | Adult content |
| 9 | Personal/Misc | Blogs, user profiles |
| 10 | Machine-Generated | Lorem ipsum, garbled text |
| 11 | Legal/Regulatory | Contracts, terms of service |
| 12 | Government/Political | Legislation, press releases |
| 13 | Literary/Creative | Poems, short stories |
| 14 | Reviews/Critiques | Film critiques, product reviews |
| 15 | E-Commerce/Marketplace | eBay listings, Amazon pages |
| 16 | Images/Videos/Audio | YouTube videos, Imgur pages |
| 17 | Other/Unclassified | Documents that resist classification |
Document Type v2
Updated classification based on WebOrganizer taxonomy with refined categories for improved document classification accuracy:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main document type code (v2) | eai_taxonomy.document_type_v2.primary.code |
| Primary Label | Main document type label (v2) | eai_taxonomy.document_type_v2.primary.label |
| Secondary Code | Alternative document type code (v2) | eai_taxonomy.document_type_v2.secondary.code |
| Secondary Label | Alternative document type label (v2) | eai_taxonomy.document_type_v2.secondary.label |
Complete Value Mapping:
| Code | Label | Examples |
|---|---|---|
| -1 | Abstain | Documents requiring human review |
| 1 | About (Org.) | Company about pages, mission statements |
| 2 | About (Personal) | Personal bios, LinkedIn profiles |
| 3 | Academic Writing | Research papers, abstracts, dissertations |
| 4 | Audio Transcript | Interview transcripts, court records, captions |
| 5 | Comment Section | Reddit threads, blog comments |
| 6 | Content Listing | Site maps, product catalogs, directory listings |
| 7 | Creative Writing | Song lyrics, novel excerpts, poetry |
| 8 | Documentation | API docs, README files, user manuals |
| 9 | FAQ | FAQ pages, Q&A lists |
| 10 | Knowledge Article | Wikipedia articles, Britannica entries |
| 11 | Legal Notices | Privacy policies, license agreements, terms of service |
| 12 | Listicle | Buzzfeed-style articles, "Top 10" lists |
| 13 | News (Org.) | Government blog posts, corporate announcements |
| 14 | News Article | Newspaper articles, CNN content, breaking news |
| 15 | Nonfiction Writing | Editorials, obituaries, memoirs, opinion pieces |
| 16 | Personal Blog | Personal journals, diary entries, lifestyle blogs |
| 17 | Product Page | Product descriptions, course offerings, sales pages |
| 18 | Q&A Forum | Quora posts, Stack Exchange discussions |
| 19 | Spam / Ads | SEO keyword stuffing, promotional spam |
| 20 | Structured Data | Datasheets, glossaries, JSON files, databases |
| 21 | Customer Support | Help articles, troubleshooting guides |
| 22 | Truncated | Paywalled sites, image galleries, partial content |
| 23 | Tutorial | Cooking recipes, WikiHow pages, step-by-step guides |
| 24 | User Review | Yelp reviews, TripAdvisor feedback, product reviews |
| 25 | Other/Unclassified | Miscellaneous documents not fitting other categories |
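As with every dimension in this taxonomy, Document Type v2 carries both a primary and a secondary label, so a common filtering pattern is to accept a document when either one matches a target set. A sketch under that assumption (record shape and target codes are illustrative, taken from the table above):

```python
# Accept a document if either its primary or secondary Document Type v2
# code is in a target set (8 = Documentation, 23 = Tutorial above).
TARGET_TYPES = {8, 23}

def is_doc_or_tutorial(record: dict) -> bool:
    dt = record["eai_taxonomy"]["document_type_v2"]
    codes = {int(dt["primary"]["code"]), int(dt["secondary"]["code"])}
    return bool(codes & TARGET_TYPES)

sample = {
    "eai_taxonomy": {
        "document_type_v2": {
            "primary": {"code": 14},    # News Article
            "secondary": {"code": 23},  # Tutorial
        }
    }
}
print(is_doc_or_tutorial(sample))  # True via the secondary label
```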
Extraction Artifacts
Assessment of technical extraction quality, identifying issues from HTML-to-text conversion:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main extraction artifact code | eai_taxonomy.extraction_artifacts.primary.code |
| Primary Label | Main extraction artifact label | eai_taxonomy.extraction_artifacts.primary.label |
| Secondary Code | Alternative extraction artifact code | eai_taxonomy.extraction_artifacts.secondary.code |
| Secondary Label | Alternative extraction artifact label | eai_taxonomy.extraction_artifacts.secondary.label |
Possible Values:
| Code | Label | Description |
|---|---|---|
| -1 | Abstain | Unable to determine |
| 0 | No Artifacts | Clean text with no leftover HTML or irrelevant elements |
| 1 | Leftover HTML | HTML/code artifacts remaining after extraction |
| 2 | Text Extraction Errors | Broken math expressions, encoding errors, improperly parsed tables |
| 3 | Irrelevant Content | Headers, footers, nav menus extracted by mistake |
| 4 | Indeterminate | Insufficient content to judge |
Missing Content
Assessment of content completeness and extraction success:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main missing content code | eai_taxonomy.missing_content.primary.code |
| Primary Label | Main missing content label | eai_taxonomy.missing_content.primary.label |
| Secondary Code | Alternative missing content code | eai_taxonomy.missing_content.secondary.code |
| Secondary Label | Alternative missing content label | eai_taxonomy.missing_content.secondary.label |
Possible Values:
| Code | Label | Description |
|---|---|---|
| -1 | Abstain | Unable to determine |
| 0 | No Missing Content | Complete and coherent text |
| 1 | Truncated Snippets | Obvious "...", incomplete paragraphs, cut-off text |
| 2 | Click Here References | "Download here", "Click here" without linked content |
| 3 | Incoherent Flow | Unreadable or illogical flow due to missing context |
| 4 | Missing Images or Figures | Placeholders or references to missing visual content |
| 5 | Missing Referenced Data | References to absent tables/datasets (e.g., "See Table 3") |
| 6 | Indeterminate | Insufficient content to judge |
Text Structure Information
| Field | Type | Description | Path |
|---|---|---|---|
| Line Start Indices | List[Int32] | Starting indices of each line | line_start_n_end_idx.line_start_idx |
| Line End Indices | List[Int32] | Ending indices of each line | line_start_n_end_idx.line_end_idx |
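These offsets index directly into text, so individual lines can be recovered without re-splitting. In sampled records an end index appears to coincide with the next line's start, meaning the slice may carry the trailing newline; treating that as an assumption, a toy sketch looks like:

```python
# Recover lines from `text` using the stored offsets. Toy record; in the
# real data an end index appears to equal the next line's start, so the
# slice may include a trailing newline (assumption), which we strip.
record = {
    "text": "Printing Test\nNavigate to the record.",
    "line_start_n_end_idx": {
        "line_start_idx": [0, 14],
        "line_end_idx": [14, 37],
    },
}

idx = record["line_start_n_end_idx"]
lines = [
    record["text"][start:end].rstrip("\n")
    for start, end in zip(idx["line_start_idx"], idx["line_end_idx"])
]
print(lines)  # ['Printing Test', 'Navigate to the record.']
```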
Content Quality Dimensions
Quality assessment inspired by NaturalReasoning and FineWeb efforts to categorize web data by information sophistication.
Reasoning Depth
Assesses the complexity and sophistication of logical reasoning in the document:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main reasoning depth code | eai_taxonomy.reasoning_depth.primary.code |
| Primary Label | Main reasoning depth label | eai_taxonomy.reasoning_depth.primary.label |
| Secondary Code | Alternative reasoning depth code | eai_taxonomy.reasoning_depth.secondary.code |
| Secondary Label | Alternative reasoning depth label | eai_taxonomy.reasoning_depth.secondary.label |
Possible Values:
| Code | Label | Description |
|---|---|---|
| -1 | Abstain | Unable to determine |
| 1 | No Reasoning | Facts present but no evidence of reasoning |
| 2 | Basic Reasoning | Basic analysis with minimal explanation and summarization |
| 3 | Intermediate Reasoning | Some logical steps connecting ideas and structured thinking |
| 4 | Advanced Reasoning | Multi-step reasoning and thorough analysis with well-developed explanations |
| 5 | Exceptional Reasoning | Novel abstractions, theoretical frameworks, long chain-of-thought, original insights, or proofs |
| 6 | Indeterminate | Insufficient context to judge |
Technical Correctness
Evaluates the accuracy and precision of technical information:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main technical correctness code | eai_taxonomy.technical_correctness.primary.code |
| Primary Label | Main technical correctness label | eai_taxonomy.technical_correctness.primary.label |
| Secondary Code | Alternative technical correctness code | eai_taxonomy.technical_correctness.secondary.code |
| Secondary Label | Alternative technical correctness label | eai_taxonomy.technical_correctness.secondary.label |
Possible Values:
| Code | Label | Description |
|---|---|---|
| -1 | Abstain | Unable to determine |
| 1 | Technically Flawed | Significant errors undermining content validity |
| 2 | Partially Correct | Some correctness but contains flaws, omissions, or errors |
| 3 | Mostly Correct | Technical correctness with minor flaws or incomplete explanations |
| 4 | Highly Correct | High technical correctness with precise definitions and clear explanations |
| 5 | Exceptionally Correct | Exceptional technical correctness with formal proofs and flawless content |
| 6 | Not Applicable/Indeterminate | No technical content or insufficient context |
Education Level
Assesses the appropriate educational background required to comprehend the content:
| Component | Description | Path |
|---|---|---|
| Primary Code | Main education level code | eai_taxonomy.education_level.primary.code |
| Primary Label | Main education level label | eai_taxonomy.education_level.primary.label |
| Secondary Code | Alternative education level code | eai_taxonomy.education_level.secondary.code |
| Secondary Label | Alternative education level label | eai_taxonomy.education_level.secondary.label |
Possible Values:
| Code | Label | Description |
|---|---|---|
| -1 | Abstain | Unable to determine |
| 1 | General Audience | Accessible to anyone with basic literacy; simple terms |
| 2 | High School Level | Requires high school education; specialized terminology explained for non-experts |
| 3 | Undergraduate Level | Requires college education; uses specialized terminology and assumes background knowledge |
| 4 | Graduate/Expert Level | Requires graduate education or domain expertise; assumes deep background knowledge |
| 5 | Indeterminate | Insufficient content to judge educational level |
Metadata
Metadata Structure
The metadata field contains a nested structure with web archive information:
| Field | Type | Description | Path |
|---|---|---|---|
| URL Information | | | |
| URL | String | Original URL of the document | metadata.url |
| Source Domain | String | Domain name of the source | metadata.source_domain |
| Snapshot ID | String | Identifier for the web archive snapshot | metadata.snapshot_id |
| WARC Metadata | | WARC (Web ARChive) format metadata | |
| Content Length | String | Size of the content | metadata.warc_metadata.Content-Length |
| Content Type | String | MIME type of the content | metadata.warc_metadata.Content-Type |
| Block Digest | String | Checksum of the WARC block | metadata.warc_metadata.WARC-Block-Digest |
| Concurrent To | String | Related WARC records | metadata.warc_metadata.WARC-Concurrent-To |
| Date | String | Timestamp of the crawl | metadata.warc_metadata.WARC-Date |
| IP Address | String | Source server IP address | metadata.warc_metadata.WARC-IP-Address |
| Payload Type | String | Identified content type | metadata.warc_metadata.WARC-Identified-Payload-Type |
| Payload Digest | String | Checksum of the payload | metadata.warc_metadata.WARC-Payload-Digest |
| Record ID | String | Unique WARC record identifier | metadata.warc_metadata.WARC-Record-ID |
| Target URI | String | Original target URL | metadata.warc_metadata.WARC-Target-URI |
| Truncated | String | Truncation status | metadata.warc_metadata.WARC-Truncated |
| Type | String | WARC record type | metadata.warc_metadata.WARC-Type |
| Warcinfo ID | String | Associated warcinfo record | metadata.warc_metadata.WARC-Warcinfo-ID |
| Additional Info | | | |
| WARC Info | String | Additional WARC information | metadata.warc_info |
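Crawl provenance can be summarized by walking the nested metadata fields. This is a minimal sketch; the record is a hand-made stand-in following the table above, with invented field values:

```python
def crawl_provenance(record):
    """Return a small provenance summary for one record."""
    meta = record["metadata"]
    warc = meta.get("warc_metadata", {})
    return {
        "domain": meta["source_domain"],
        "snapshot": meta["snapshot_id"],
        "crawled_at": warc.get("WARC-Date"),
        "record_type": warc.get("WARC-Type"),
    }

# Hand-made example record; values are illustrative only.
example = {
    "metadata": {
        "url": "https://example.com/page",
        "source_domain": "example.com",
        "snapshot_id": "CC-MAIN-2024-38",
        "warc_metadata": {
            "WARC-Date": "2024-09-12T03:14:15Z",
            "WARC-Type": "response",
        },
    }
}

print(crawl_provenance(example))
```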
Quality Signals
The dataset includes two comprehensive quality assessment frameworks:
Red Pajama v2 Quality Metrics
Text quality indicators derived from the Red Pajama v2 filtering pipeline:
Content Structure Metrics
| Metric | Description | Path |
|---|---|---|
| Original Length | Original document length | quality_signals.red_pajama_v2.ccnet_original_length |
| Original Lines | Number of lines in original document | quality_signals.red_pajama_v2.ccnet_original_nlines |
| Sentence Count | Total sentence count | quality_signals.red_pajama_v2.rps_doc_num_sentences |
| Word Count | Total word count | quality_signals.red_pajama_v2.rps_doc_word_count |
| Mean Word Length | Average word length | quality_signals.red_pajama_v2.rps_doc_mean_word_length |
Language Quality Metrics
| Metric | Description | Path |
|---|---|---|
| Stop Word Fraction | Proportion of stop words | quality_signals.red_pajama_v2.rps_doc_stop_word_fraction |
| Unique Words Fraction | Fraction of unique words | quality_signals.red_pajama_v2.rps_doc_frac_unique_words |
| All Caps Words | Fraction of words in all capitals | quality_signals.red_pajama_v2.rps_doc_frac_all_caps_words |
| Non-Alphabetic Words | Fraction of non-alphabetic words | quality_signals.red_pajama_v2.rps_doc_frac_no_alph_words |
| Unigram Entropy | Entropy measure of word distribution | quality_signals.red_pajama_v2.rps_doc_unigram_entropy |
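These signals are commonly combined into simple keep/drop rules. The sketch below gates on stop-word fraction and unigram entropy; the thresholds are placeholders for illustration, not values used by the dataset authors:

```python
def passes_language_quality(record, min_stop_word_frac=0.1, min_unigram_entropy=3.0):
    """Keep documents with natural-language-like word statistics."""
    rp = record["quality_signals"]["red_pajama_v2"]
    return (rp["rps_doc_stop_word_fraction"] >= min_stop_word_frac
            and rp["rps_doc_unigram_entropy"] >= min_unigram_entropy)

# Hand-made sample with invented signal values.
sample = {"quality_signals": {"red_pajama_v2": {
    "rps_doc_stop_word_fraction": 0.37,
    "rps_doc_unigram_entropy": 4.8,
}}}
print(passes_language_quality(sample))  # True
```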
Content Pattern Analysis
| Metric | Description | Path |
|---|---|---|
| Curly Bracket Density | Curly bracket density (code indicator) | quality_signals.red_pajama_v2.rps_doc_curly_bracket |
| Symbol-to-Word Ratio | Symbol-to-word ratio | quality_signals.red_pajama_v2.rps_doc_symbol_to_word_ratio |
| Ellipsis Line Endings | Lines ending with ellipsis | quality_signals.red_pajama_v2.rps_doc_frac_lines_end_with_ellipsis |
| Lorem Ipsum Detection | Lorem ipsum text detection | quality_signals.red_pajama_v2.rps_doc_lorem_ipsum |
| Offensive Content | Potentially offensive content detection | quality_signals.red_pajama_v2.rps_doc_ldnoobw_words |
| UT1 Blacklist | UT1 blacklist filtering score | quality_signals.red_pajama_v2.rps_doc_ut1_blacklist |
Duplication Detection
| Metric | Description | Path |
|---|---|---|
| 5-gram Duplication | Character-level duplication for 5-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_5grams |
| 6-gram Duplication | Character-level duplication for 6-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_6grams |
| 7-gram Duplication | Character-level duplication for 7-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_7grams |
| 8-gram Duplication | Character-level duplication for 8-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_8grams |
| 9-gram Duplication | Character-level duplication for 9-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_9grams |
| 10-gram Duplication | Character-level duplication for 10-grams | quality_signals.red_pajama_v2.rps_doc_frac_chars_dupe_10grams |
| Top 2-gram Coverage | Most frequent 2-gram coverage | quality_signals.red_pajama_v2.rps_doc_frac_chars_top_2gram |
| Top 3-gram Coverage | Most frequent 3-gram coverage | quality_signals.red_pajama_v2.rps_doc_frac_chars_top_3gram |
| Top 4-gram Coverage | Most frequent 4-gram coverage | quality_signals.red_pajama_v2.rps_doc_frac_chars_top_4gram |
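The duplication fractions above can be used to drop repetitive documents. This is a sketch with illustrative cutoffs, not values used by the dataset authors:

```python
def low_duplication(record, max_dupe_5gram=0.30, max_top_2gram=0.20):
    """Keep documents with little repeated n-gram content."""
    rp = record["quality_signals"]["red_pajama_v2"]
    return (rp["rps_doc_frac_chars_dupe_5grams"] <= max_dupe_5gram
            and rp["rps_doc_frac_chars_top_2gram"] <= max_top_2gram)

# Hand-made sample with invented signal values.
sample = {"quality_signals": {"red_pajama_v2": {
    "rps_doc_frac_chars_dupe_5grams": 0.05,
    "rps_doc_frac_chars_top_2gram": 0.08,
}}}
print(low_duplication(sample))  # True
```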
Domain Importance Scores
| Metric | Description | Path |
|---|---|---|
| Books Importance | Similarity to book content | quality_signals.red_pajama_v2.rps_doc_books_importance |
| Books Importance (Length Corrected) | Length-corrected books similarity | quality_signals.red_pajama_v2.rps_doc_books_importance_length_correction |
| OpenWebText Importance | Similarity to OpenWebText | quality_signals.red_pajama_v2.rps_doc_openwebtext_importance |
| OpenWebText Importance (Length Corrected) | Length-corrected OpenWebText similarity | quality_signals.red_pajama_v2.rps_doc_openwebtext_importance_length_correction |
| Wikipedia Importance | Similarity to Wikipedia | quality_signals.red_pajama_v2.rps_doc_wikipedia_importance |
| Wikipedia Importance (Length Corrected) | Length-corrected Wikipedia similarity | quality_signals.red_pajama_v2.rps_doc_wikipedia_importance_length_correction |
FastText Classification Scores
Domain and content type classification probabilities:
| Metric | Description | Path |
|---|---|---|
| DCLM Score | DataComp-LM classifier score | quality_signals.fasttext.dclm |
| English Confidence | English language confidence | quality_signals.fasttext.english |
| Educational Content | Educational content approximation | quality_signals.fasttext.fineweb_edu_approx |
| General Math | General mathematics content | quality_signals.fasttext.eai_general_math |
| Web Math | OpenWebMath-style web mathematics content | quality_signals.fasttext.eai_open_web_math |
| Code Content | Code content detection | quality_signals.fasttext.eai_web_code |
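The classifier probabilities can be thresholded to select a content slice. The sketch below keeps English, code-heavy documents; the 0.5 cutoffs are arbitrary placeholders for illustration:

```python
def english_code_content(record, min_english=0.5, min_code=0.5):
    """Keep documents the classifiers consider English code content."""
    ft = record["quality_signals"]["fasttext"]
    return ft["english"] >= min_english and ft["eai_web_code"] >= min_code

# Hand-made sample with invented classifier scores.
sample = {"quality_signals": {"fasttext": {"english": 0.93, "eai_web_code": 0.81}}}
print(english_code_content(sample))  # True
```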
How to Load the Dataset
This section provides examples of how to load the EssentialAI/eai-taxonomy-code-w-dclm-100b-sample dataset using different Python libraries and frameworks.
Using Hugging Face Datasets (Standard Method)
The simplest way to load the dataset is using the Hugging Face datasets library:
```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("EssentialAI/eai-taxonomy-code-w-dclm-100b-sample")

# View dataset structure
print(dataset)
print(f"Number of examples: {len(dataset['train'])}")
```
You can also load the dataset in streaming mode to avoid downloading the entire dataset at once:
```python
from datasets import load_dataset

# Load in streaming mode
dataset = load_dataset("EssentialAI/eai-taxonomy-code-w-dclm-100b-sample", streaming=True)
data_stream = dataset["train"]

# Iterate through examples
for example in data_stream.take(5):
    print(example)
```
Using PySpark
For large-scale distributed processing, you can load the dataset using PySpark with the pyspark_huggingface library:
```python
# First install the required library:
# pip install pyspark_huggingface

import pyspark_huggingface
from pyspark.sql import SparkSession

# Initialize Spark session
spark = SparkSession.builder.appName("EAI-Taxonomy-Code-w-DCLM").getOrCreate()

# Load the dataset using the "huggingface" data source
df = spark.read.format("huggingface").load("EssentialAI/eai-taxonomy-code-w-dclm-100b-sample")

# Basic dataset exploration
print(f"Dataset shape: {df.count()} rows, {len(df.columns)} columns")
df.show(10)
df.printSchema()

# Load only specific columns for efficiency
df_subset = (
    spark.read.format("huggingface")
    .option("columns", '["column1", "column2"]')  # Replace with actual column names
    .load("EssentialAI/eai-taxonomy-code-w-dclm-100b-sample")
)

# Run SQL queries on the dataset
df.createOrReplaceTempView("eai_taxonomy_code_w_dclm_dataset")
result = spark.sql("""
    SELECT COUNT(*) as total_examples
    FROM eai_taxonomy_code_w_dclm_dataset
""")
result.show()
```
Using Daft
Daft provides a modern DataFrame library optimized for machine learning workloads. You can load the dataset directly from Hugging Face:
```python
import daft

# Load the entire dataset
df = daft.read_parquet("hf://datasets/EssentialAI/eai-taxonomy-code-w-dclm-100b-sample")

# Basic exploration
print("Dataset schema:")
df.schema()

print("First 5 rows:")
df.show(5)
```
If you need to access private datasets or use authentication:
```python
import daft
from daft.io import IOConfig, HTTPConfig

io_config = IOConfig(http=HTTPConfig(bearer_token="your_token"))
df = daft.read_parquet(
    "hf://datasets/EssentialAI/eai-taxonomy-code-w-dclm-100b-sample",
    io_config=io_config,
)
```
Installation Requirements
Make sure you have the required libraries installed:
```bash
# For Hugging Face datasets
pip install datasets

# For PySpark with Hugging Face integration
pip install pyspark_huggingface

# For Daft
pip install daft
```
📜 License
Essential-Web-v1.0 contributions are made available under the ODC attribution license; however, users should also abide by the Common Crawl - Terms of Use. We do not alter the license of any of the underlying data.
📝 Citation
```bibtex
@misc{ai2025essentialwebv1024ttokens,
      title={Essential-Web v1.0: 24T tokens of organized web data},
      author={Essential AI and : and Andrew Hojel and Michael Pust and Tim Romanski and Yash Vanjani and Ritvik Kapila and Mohit Parmar and Adarsh Chaluvaraju and Alok Tripathy and Anil Thomas and Ashish Tanwer and Darsh J Shah and Ishaan Shah and Karl Stratos and Khoi Nguyen and Kurt Smith and Michael Callahan and Peter Rushton and Philip Monk and Platon Mazarakis and Saad Jamal and Saurabh Srivastava and Somanshu Singla and Ashish Vaswani},
      year={2025},
      eprint={2506.14111},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2506.14111},
}
```