Lucas22300 and AliMaatouk committed
Commit 62ed201 · verified · 0 parents

Duplicate from AliMaatouk/Tele-Data

Co-authored-by: Ali Maatouk <AliMaatouk@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,62 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
+ arxiv/arxiv.jsonl filter=lfs diff=lfs merge=lfs -text
+ standard/standard.jsonl filter=lfs diff=lfs merge=lfs -text
+ web/web.jsonl filter=lfs diff=lfs merge=lfs -text
+ wiki/wiki.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,97 @@
+ ---
+ license: mit
+ language:
+ - en
+ tags:
+ - telecom
+ task_categories:
+ - text-generation
+ configs:
+ - config_name: default
+   data_files:
+   - split: data
+     path:
+     - arxiv/arxiv.jsonl
+     - standard/standard.jsonl
+     - web/web.jsonl
+     - wiki/wiki.jsonl
+ ---
+
+ # Tele-Data
+
+ ## Dataset Summary
+
+ Tele-Data is a comprehensive dataset of telecommunications material that revolves around four categories of sources: (1) scientific papers from arXiv, (2) 3GPP standards, (3) Wikipedia articles related to telecommunications, and (4) telecommunications-related websites extracted from Common Crawl dumps.
+
+ LLM-based filtering was used to identify the relevant material from these sources, which then underwent extensive cleaning, format unification, and equation material standardization. The dataset consists of approximately 2.5 billion tokens, making it ideal for continually pretraining language models to adapt them to the telecommunications domain.
+
+ ## Dataset Structure
+
+ ### Data Fields
+
+ The data fields are as follows:
+
+ * `ID`: Provides a unique identifier for each data sample.
+ * `Category`: Identifies the category of the sample.
+ * `Content`: Includes the full text of the data sample.
+ * `Metadata`: Includes a JSON object, cast as a string, with information relevant to each data sample, which varies depending on the category.
+
+ ### Data Instances
+
+ An example of Tele-Data looks as follows:
+
+ ```json
+ {
+     "ID": "standard_2413",
+     "Category": "standard",
+     "Content": "3rd Generation Partnership Project; \n Technical Specification Group Core Network and Terminals;\n Interworking between the Public Land Mobile Network (PLMN)\n supporting packet based services with\n Wireless Local Area Network (WLAN) Access and\n Packet Data Networks (PDN)\n (Release 12)\n Foreword\n This Technical Specification (TS) has been produced...",
+     "Metadata": {
+         "Series": "29",
+         "Release": "12",
+         "File_name": "29161-c00"
+     }
+ }
+ ```
+
+ ## Sample Code
+
+ Below, we share a code snippet showing how to quickly get started with the dataset. First, make sure to `pip install datasets`, then copy the snippet below and adapt it to your use case.
+
+ #### Using the whole dataset
+
+ ```python
+ import json
+ from datasets import load_dataset
+
+ Tele_Data = load_dataset("AliMaatouk/Tele-Data")
+ data_sample = Tele_Data['train'][0]
+ print(f"ID: {data_sample['id']}\nCategory: {data_sample['category']}\nContent: {data_sample['content']}")
+ for key, value in json.loads(data_sample['metadata']).items():
+     print(f"{key}: {value}")
+ ```
+
+ #### Using a subset of the dataset
+
+ ```python
+ import json
+ from datasets import load_dataset
+
+ Tele_Data = load_dataset("AliMaatouk/Tele-Data", name="standard")
+ data_sample = Tele_Data['train'][0]
+ print(f"ID: {data_sample['id']}\nCategory: {data_sample['category']}\nContent: {data_sample['content']}")
+ for key, value in json.loads(data_sample['metadata']).items():
+     print(f"{key}: {value}")
+ ```
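+
+ #### Streaming the dataset
+
+ The subset files range from roughly 130 MB to over 7 GB, so it can help to stream records lazily instead of downloading everything up front. The snippet below is a minimal sketch of this, assuming the same config names as above and a `datasets` version with streaming support:
+
+ ```python
+ import json
+ from datasets import load_dataset
+
+ # Iterate over the "wiki" subset without materializing the whole file locally.
+ stream = load_dataset("AliMaatouk/Tele-Data", name="wiki", split="train", streaming=True)
+ for i, sample in enumerate(stream):
+     print(f"ID: {sample['id']}\nCategory: {sample['category']}")
+     print(json.loads(sample['metadata']))
+     if i == 2:  # inspect only the first few records
+         break
+ ```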
+
+ ## Citation
+
+ You can find the paper with all details about the dataset at https://arxiv.org/abs/2409.05314. Please cite it as follows:
+
+ ```
+ @misc{maatouk2024telellmsseriesspecializedlarge,
+       title={Tele-LLMs: A Series of Specialized Large Language Models for Telecommunications},
+       author={Ali Maatouk and Kenny Chirino Ampudia and Rex Ying and Leandros Tassiulas},
+       year={2024},
+       eprint={2409.05314},
+       archivePrefix={arXiv},
+       primaryClass={cs.IT},
+       url={https://arxiv.org/abs/2409.05314},
+ }
+ ```
Tele-Data.py ADDED
@@ -0,0 +1,51 @@
+ import json
+ import datasets
+
+
+ class TeleData(datasets.GeneratorBasedBuilder):
+     """Tele-Data dataset with multiple subsets: arxiv, standard, web, and wiki."""
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(name="arxiv", version=datasets.Version("1.0.0"), description="ArXiv data"),
+         datasets.BuilderConfig(name="standard", version=datasets.Version("1.0.0"), description="Standard data"),
+         datasets.BuilderConfig(name="web", version=datasets.Version("1.0.0"), description="Web data"),
+         datasets.BuilderConfig(name="wiki", version=datasets.Version("1.0.0"), description="Wiki data"),
+         datasets.BuilderConfig(name="full", version=datasets.Version("1.0.0"), description="Full dataset"),
+     ]
+
+     DEFAULT_CONFIG_NAME = "full"
+
+     def _info(self):
+         # Every subset shares the same flat schema; `metadata` is a JSON object
+         # serialized as a string, since its fields vary by category.
+         features = datasets.Features({
+             "id": datasets.Value("string"),
+             "category": datasets.Value("string"),
+             "content": datasets.Value("string"),
+             "metadata": datasets.Value("string"),
+         })
+         return datasets.DatasetInfo(features=features)
+
+     def _split_generators(self, dl_manager):
+         # The "full" config concatenates all four subsets; otherwise only the
+         # requested subset's JSONL file is downloaded.
+         if self.config.name == "full":
+             urls = [f"{name}/{name}.jsonl" for name in ["arxiv", "standard", "web", "wiki"]]
+         else:
+             urls = [f"{self.config.name}/{self.config.name}.jsonl"]
+
+         data_files = dl_manager.download_and_extract(urls)
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepaths": data_files if isinstance(data_files, list) else [data_files]},
+             )
+         ]
+
+     def _generate_examples(self, filepaths):
+         # A single running key keeps example keys unique across files, which
+         # matters for the "full" config where all subsets are concatenated.
+         key = 0
+         for filepath in filepaths:
+             with open(filepath, "r", encoding="utf-8") as f:
+                 for line in f:
+                     data = json.loads(line)
+                     yield key, {
+                         "id": data["id"],
+                         "category": data["category"],
+                         "content": data["content"],
+                         "metadata": json.dumps(data["metadata"]),
+                     }
+                     key += 1
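+
+
+ # A minimal local smoke test of the builder above: a sketch that assumes the four
+ # <subset>/<subset>.jsonl files are available next to this script and a `datasets`
+ # version that still supports script-based loading.
+ if __name__ == "__main__":
+     from datasets import load_dataset
+
+     ds = load_dataset(__file__, name="wiki", split="train")
+     sample = ds[0]
+     print(sample["id"], sample["category"])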
arxiv/arxiv.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1f2d03a5cc1d8b0a13964d1a47e2456cd19813beddd8711e47d2e24102a824e7
+ size 4309790362
standard/standard.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b9560c7a62612c4724e986fc7443019d886a158a961ddc4c3e521a6650486fa9
+ size 350404101
web/web.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a11dede469b0f5568298a8172ca451382c130e1e8db215a5158d3681598759e
+ size 7266835533
wiki/wiki.jsonl ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0a5bb8acd97ad37a884a845a705206980039d337e49ab3ea5c278925b2e2977e
+ size 129518949
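
The four JSONL files above are stored as Git LFS pointers (roughly 4.3 GB, 350 MB, 7.3 GB, and 130 MB, respectively). As a minimal sketch, assuming the `huggingface_hub` package is installed, a single subset file can also be fetched directly without the loading script:

```python
import json
from huggingface_hub import hf_hub_download

# Download only the wiki subset (the smallest file) from the dataset repository;
# the LFS pointer is resolved to the actual JSONL content automatically.
path = hf_hub_download(repo_id="AliMaatouk/Tele-Data", filename="wiki/wiki.jsonl", repo_type="dataset")

# Inspect the fields of the first record.
with open(path, "r", encoding="utf-8") as f:
    first_record = json.loads(next(f))
print(list(first_record.keys()))
```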