Dataset Preview

The full dataset viewer is not available for this dataset; dataset generation for the viewer failed with:

Error code:   DatasetGenerationError
Exception:    ArrowInvalid
Message:      Failed to parse string: 'visit repo url' as a scalar of type timestamp[s]
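The failure comes from a literal string such as 'visit repo url' appearing in a field that the viewer tries to cast to timestamp[s]. If you load the JSONL splits yourself and hit the same ArrowInvalid, one workaround is to parse records leniently. The sketch below is an assumption, not part of the original pipeline: it treats any unparseable date value as a missing timestamp, and the date field names follow the preview schema listed below.

```python
import json
from datetime import datetime

def parse_ts(value):
    """Parse an ISO-8601 timestamp; return None for sentinels like 'visit repo url'."""
    try:
        return datetime.fromisoformat(value)
    except (TypeError, ValueError):
        return None

def load_records(path):
    """Yield JSONL records, coercing the date fields with a lenient parser."""
    with open(path) as f:
        for line in f:
            rec = json.loads(line)
            for key in ("published_date", "last_modified_date"):
                if key in rec:
                    rec[key] = parse_ts(rec[key])
            yield rec
```

This keeps bad rows loadable instead of aborting the whole conversion, at the cost of losing the original sentinel string.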
The preview exposes the following columns and types:

  • cve_id: string
  • published_date: timestamp[us]
  • last_modified_date: timestamp[us]
  • description: string
  • nodes: string
  • severity: string
  • obtain_all_privilege: string
  • obtain_user_privilege: string
  • obtain_other_privilege: string
  • user_interaction_required: string
  • cvss2_vector_string: string
  • cvss2_access_vector: string
  • cvss2_access_complexity: string
  • cvss2_authentication: string
  • cvss2_confidentiality_impact: string
  • cvss2_integrity_impact: string
  • cvss2_availability_impact: string
  • cvss2_base_score: string
  • cvss3_vector_string: string
  • cvss3_attack_vector: string
  • cvss3_attack_complexity: string
  • cvss3_privileges_required: string
  • cvss3_user_interaction: string
  • cvss3_scope: string
  • cvss3_confidentiality_impact: string
  • cvss3_integrity_impact: string
  • cvss3_availability_impact: string
  • cvss3_base_score: string
  • cvss3_base_severity: string
  • exploitability_score: string
  • impact_score: string
  • ac_insuf_info: string
  • reference_json: string
  • problemtype_json: string
  • cwe_info: list
  • fixes_info: list

CVEfixes Data Splits README

This repository contains data splits derived from the CVEfixes_v1.0.8 dataset, an automated collection of vulnerabilities and their fixes from open-source software. The dataset has been processed and split into training, validation, and test sets to facilitate machine learning and vulnerability analysis tasks. Below, you’ll find details about the splits, problematic CVEs excluded due to memory constraints, and a comprehensive guide on how to recreate these splits yourself.

Dataset Overview

The original CVEfixes_v1.0.8 dataset was sourced from the GitHub repository https://github.com/secureIT-project/CVEfixes. We’ve split it into four parts:

  • Training Split (Part 1): 4000 CVEs (first portion of the 70% training data)
  • Training Split (Part 2): 4307 CVEs (remaining portion of the 70% training data, totaling 8307 CVEs with Part 1)
  • Validation Split: 1781 CVEs (15% of the dataset)
  • Test Split: 1781 CVEs (15% of the dataset)

These splits include full data from all tables in the CVEfixes.db SQLite database, preserving referential integrity across tables such as cve, fixes, commits, file_change, method_change, cwe, cwe_classification, and repository.
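These cross-table links can be seen in miniature with a toy version of the core tables. The sketch below is illustrative only: the schemas are trimmed to the join keys, the author value is a placeholder, and the real tables carry many more columns (see Step 4).

```python
import sqlite3

# Minimal in-memory illustration of how the CVEfixes tables link together:
# cve -> fixes (cve_id) -> commits (hash, repo_url) -> repository (repo_url).
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cve        (cve_id TEXT PRIMARY KEY);
    CREATE TABLE fixes      (cve_id TEXT, hash TEXT, repo_url TEXT);
    CREATE TABLE commits    (hash TEXT, repo_url TEXT, author TEXT);
    CREATE TABLE repository (repo_url TEXT PRIMARY KEY);

    INSERT INTO cve        VALUES ('CVE-2013-7283');
    INSERT INTO fixes      VALUES ('CVE-2013-7283',
                                   'ef2d756e73a188401c36133c2e2f7ce4f3c6ae55',
                                   'https://github.com/libreswan/libreswan');
    INSERT INTO commits    VALUES ('ef2d756e73a188401c36133c2e2f7ce4f3c6ae55',
                                   'https://github.com/libreswan/libreswan',
                                   '<author>');
    INSERT INTO repository VALUES ('https://github.com/libreswan/libreswan');
""")

row = conn.execute("""
    SELECT cve.cve_id, fixes.hash, commits.author
    FROM cve
    JOIN fixes      ON fixes.cve_id     = cve.cve_id
    JOIN commits    ON commits.hash     = fixes.hash
                   AND commits.repo_url = fixes.repo_url
    JOIN repository ON repository.repo_url = fixes.repo_url
""").fetchone()
print(row)  # ('CVE-2013-7283', 'ef2d756e73a188401c36133c2e2f7ce4f3c6ae55', '<author>')
conn.close()
```

The bundling script in Step 7 walks exactly these joins, one CVE at a time, to build each JSONL record.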

Excluded CVEs

The following CVEs were excluded from processing due to excessive memory usage (>50GB RAM), which caused runtime crashes on standard Colab environments:

  • CVE-2021-3957
  • CVE-2024-26152
  • CVE-2016-5833
  • CVE-2023-6848

If your system has less than 50GB of RAM, we recommend skipping these CVEs during processing to avoid crashes.

How to Create Your Own Data Split

Below is a step-by-step guide to download, extract, and split the CVEfixes_v1.0.8 dataset into training, validation, and test sets, mirroring the process used to create these splits. This includes Python code snippets ready to run in a Google Colab environment.

Step 1: Download the Original ZIP File

Download the dataset from Hugging Face using the huggingface_hub library.

from huggingface_hub import snapshot_download
repo_id = "starsofchance/CVEfixes_v1.0.8"
filename = "CVEfixes_v1.0.8.zip"
dataset_path = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    allow_patterns=filename  # Only download the zip file and not the splits we created
)
print(f"Dataset downloaded to: {dataset_path}")

After the download completes, a message is printed in the form: Dataset containing CVEfixes_v1.0.8.zip downloaded to: <cache path>. Copy that cache path; you will need it in Step 3.

Step 2: Create a Folder to Extract the Data

Set up a directory to extract the contents of the ZIP file.

import os
extract_dir = "/content/extracted_data"
os.makedirs(extract_dir, exist_ok=True)
print(f"Extraction directory created at: {extract_dir}")

Step 3: Decompress and Convert to SQLite Database

Extract the .sql.gz file from the ZIP and convert it into a SQLite database.

cache_path = "the cache path you copied in Step 1"
zip_file_path = os.path.join(cache_path, "CVEfixes_v1.0.8.zip")
!unzip -q "{zip_file_path}" -d "{extract_dir}"
# Verify extraction
print("\nExtracted files:")
!ls -lh "{extract_dir}"

Then decompress the .sql.gz file and load it into a SQLite database:

!zcat {extract_dir}/CVEfixes_v1.0.8/Data/CVEfixes_v1.0.8.sql.gz | sqlite3 /content/CVEfixes.db
print("Database created at: /content/CVEfixes.db")

Step 4: Explore Tables and Relationships

Connect to the database and inspect its structure.

import sqlite3
import pandas as pd

# Connect to the database
conn = sqlite3.connect('/content/CVEfixes.db')
cursor = conn.cursor()

# Get all tables
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
tables = cursor.fetchall()
print("Tables in the database:", tables)

# Display column headers for each table
for table in tables:
    table_name = table[0]
    print(f"\nHeaders for table '{table_name}':")
    cursor.execute(f"PRAGMA table_info('{table_name}');")
    columns = cursor.fetchall()
    column_names = [col[1] for col in columns]
    print(f"Columns: {column_names}")

# Count rows in each table
for table in tables:
    table_name = table[0]
    cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
    row_count = cursor.fetchone()[0]
    print(f"Table: {table_name}, Rows: {row_count}")

conn.close()

Expected Output:

Tables in the database: [('fixes',), ('commits',), ('file_change',), ('method_change',), ('cve',), ('cwe',), ('cwe_classification',), ('repository',)]

Headers for table 'fixes':
Columns: ['cve_id', 'hash', 'repo_url']

Headers for table 'commits':
Columns: ['hash', 'repo_url', 'author', 'author_date', 'author_timezone', 'committer', 'committer_date', 'committer_timezone', 'msg', 'merge', 'parents', 'num_lines_added', 'num_lines_deleted', 'dmm_unit_complexity', 'dmm_unit_interfacing', 'dmm_unit_size']

[... truncated for brevity ...]

Table: fixes, Rows: 12923
Table: commits, Rows: 12107
Table: file_change, Rows: 51342
Table: method_change, Rows: 277948
Table: cve, Rows: 11873
Table: cwe, Rows: 272
Table: cwe_classification, Rows: 12198
Table: repository, Rows: 4249
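Given these counts, a cheap sanity check before splitting is to confirm that every fixes row references a CVE present in the cve table. The helper below is a sketch: it is demonstrated on a synthetic in-memory database, and against the real data you would pass a connection to /content/CVEfixes.db instead.

```python
import sqlite3

def orphan_fixes(conn):
    """Return cve_ids referenced in `fixes` that have no matching row in `cve`."""
    return [r[0] for r in conn.execute("""
        SELECT DISTINCT fixes.cve_id
        FROM fixes LEFT JOIN cve ON cve.cve_id = fixes.cve_id
        WHERE cve.cve_id IS NULL;
    """)]

# Demonstration on a tiny synthetic database with one dangling reference:
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE cve   (cve_id TEXT PRIMARY KEY);
    CREATE TABLE fixes (cve_id TEXT, hash TEXT, repo_url TEXT);
    INSERT INTO cve   VALUES ('CVE-2013-7283');
    INSERT INTO fixes VALUES ('CVE-2013-7283', 'abc', 'url');
    INSERT INTO fixes VALUES ('CVE-0000-0000', 'def', 'url');
""")
dangling = orphan_fixes(conn)
print(dangling)  # ['CVE-0000-0000']
conn.close()
```

An empty result means the cve table can safely serve as the anchor for the split, as done in Step 5.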

Step 5: Retrieve All Distinct CVE IDs

Extract unique CVE IDs from the cve table, which serves as the anchor for the dataset.

import sqlite3

conn = sqlite3.connect('/content/CVEfixes.db')
cursor = conn.cursor()

cursor.execute("SELECT DISTINCT cve_id FROM cve;")
cve_ids = [row[0] for row in cursor.fetchall()]
print(f"Total CVEs found: {len(cve_ids)}")

conn.close()

Step 6: Split the CVE IDs

Randomly shuffle and split the CVE IDs into training (70%), validation (15%), and test (15%) sets.

import random
import json

# Shuffle and split the dataset; call random.seed(<value>) first for a reproducible shuffle
random.shuffle(cve_ids)
n = len(cve_ids)
train_split = cve_ids[:int(0.70 * n)]  # 70% for training
val_split = cve_ids[int(0.70 * n):int(0.85 * n)]  # 15% for validation
test_split = cve_ids[int(0.85 * n):]  # 15% for test

# Save the splits to JSON files
with open('/content/train_split.json', 'w') as f:
    json.dump(train_split, f)
with open('/content/val_split.json', 'w') as f:
    json.dump(val_split, f)
with open('/content/test_split.json', 'w') as f:
    json.dump(test_split, f)

# Print split sizes
print("Train count:", len(train_split))
print("Validation count:", len(val_split))
print("Test count:", len(test_split))

Expected Output:

Total CVEs found: 11873
Train count: 8311
Validation count: 1781
Test count: 1781
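A quick way to confirm the three splits form a clean partition of the CVE list is a small helper like the one below (a sketch; it operates on the train_split, val_split, test_split, and cve_ids variables produced above, demonstrated here on placeholder IDs):

```python
def check_partition(train, val, test, full):
    """Assert that train/val/test are pairwise disjoint and jointly cover `full`."""
    train, val, test, full = map(set, (train, val, test, full))
    assert not (train & val or train & test or val & test), "splits overlap"
    assert train | val | test == full, "splits do not cover all CVE IDs"

# With the variables from the snippet above, you would call:
#   check_partition(train_split, val_split, test_split, cve_ids)
# Placeholder demonstration:
check_partition(["CVE-A", "CVE-B"], ["CVE-C"], ["CVE-D"],
                ["CVE-A", "CVE-B", "CVE-C", "CVE-D"])  # passes silently
```

Running this after Step 6 catches slicing mistakes before hours are spent processing a corrupted split.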

Step 7: Process CVEs into JSONL Files

Define a function to bundle data for each CVE across all tables and write it to JSONL files. Below is an example script to process the training split, skipping problematic CVEs. You can adapt it for validation and test splits by changing the input and output files.

import sqlite3
import json
import gc

def dict_factory(cursor, row):
    if cursor.description is None or row is None:
        return None
    return {col[0]: row[idx] for idx, col in enumerate(cursor.description)}

def get_cwe_data(cursor, cve_id):
    cursor.execute("""
        SELECT cwe.* FROM cwe
        JOIN cwe_classification ON cwe.cwe_id = cwe_classification.cwe_id
        WHERE cwe_classification.cve_id = ?;
    """, (cve_id,))
    return cursor.fetchall()

def get_repository_data(cursor, repo_url, repo_cache):
    if repo_url in repo_cache:
        return repo_cache[repo_url]
    cursor.execute("SELECT * FROM repository WHERE repo_url = ?;", (repo_url,))
    repo_data = cursor.fetchone()
    repo_cache[repo_url] = repo_data
    return repo_data

def get_method_changes(cursor, file_change_id):
    cursor.execute("SELECT * FROM method_change WHERE file_change_id = ?;", (file_change_id,))
    return cursor.fetchall()

def get_file_changes(cursor, commit_hash):
    cursor.execute("SELECT * FROM file_change WHERE hash = ?;", (commit_hash,))
    file_changes = []
    for fc_row in cursor.fetchall():
        file_change_data = fc_row
        if file_change_data:
            file_change_data['method_changes'] = get_method_changes(cursor, file_change_data['file_change_id'])
            file_changes.append(file_change_data)
    return file_changes

def get_commit_data(cursor, commit_hash, repo_url, repo_cache):
    cursor.execute("SELECT * FROM commits WHERE hash = ? AND repo_url = ?;", (commit_hash, repo_url))
    commit_row = cursor.fetchone()
    if not commit_row:
        return None
    commit_data = commit_row
    commit_data['repository'] = get_repository_data(cursor, repo_url, repo_cache)
    commit_data['file_changes'] = get_file_changes(cursor, commit_hash)
    return commit_data

def get_fixes_data(cursor, cve_id, repo_cache):
    cursor.execute("SELECT * FROM fixes WHERE cve_id = ?;", (cve_id,))
    fixes = []
    for fix_row in cursor.fetchall():
        fix_data = fix_row
        if fix_data:
            commit_details = get_commit_data(cursor, fix_data['hash'], fix_data['repo_url'], repo_cache)
            if commit_details:
                fix_data['commit_details'] = commit_details
                fixes.append(fix_data)
    return fixes

def process_cve(cursor, cve_id, repo_cache):
    cursor.execute("SELECT * FROM cve WHERE cve_id = ?;", (cve_id,))
    cve_row = cursor.fetchone()
    if not cve_row:
        return None
    cve_data = cve_row
    cve_data['cwe_info'] = get_cwe_data(cursor, cve_id)
    cve_data['fixes_info'] = get_fixes_data(cursor, cve_id, repo_cache)
    return cve_data

def process_split(split_name, split_file, db_path, output_file):
    print(f"--- Processing {split_name} split ---")
    conn = sqlite3.connect(db_path)
    conn.row_factory = dict_factory
    cursor = conn.cursor()
    repo_cache = {}
    
    with open(split_file, 'r') as f:
        cve_ids = json.load(f)
    
    skip_cves = ["CVE-2021-3957", "CVE-2024-26152", "CVE-2016-5833", "CVE-2023-6848"]
    with open(output_file, 'w') as outfile:
        for i, cve_id in enumerate(cve_ids):
            if cve_id in skip_cves:
                print(f"Skipping {cve_id} due to memory constraints.")
                continue
            try:
                cve_bundle = process_cve(cursor, cve_id, repo_cache)
                if cve_bundle:
                    outfile.write(json.dumps(cve_bundle) + '\n')
                if (i + 1) % 50 == 0:
                    print(f"Processed {i + 1}/{len(cve_ids)} CVEs")
                    gc.collect()
            except Exception as e:
                print(f"Error processing {cve_id}: {e}")
                continue
    
    conn.close()
    gc.collect()
    print(f"Finished processing {split_name} split. Output saved to {output_file}")

# Example usage for training split
process_split(
    split_name="train",
    split_file="/content/train_split.json",
    db_path="/content/CVEfixes.db",
    output_file="/content/train_data.jsonl"
)

Notes:

  • Replace train with val or test and adjust file paths to process other splits.
  • The script skips the problematic CVEs listed above.
  • Output is written to a .jsonl file, with one JSON object per line.
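Once a split has been written, a quick way to inspect the result is to peek at the first record. This is a sketch: the record layout follows process_cve above, and the one-line sample file here is synthetic (the real splits live at paths like /content/train_data.jsonl).

```python
import json
import os
import tempfile

def peek_jsonl(path, limit=1):
    """Return (cve_id, sorted top-level keys) for the first `limit` records."""
    out = []
    with open(path) as f:
        for i, line in enumerate(f):
            if i >= limit:
                break
            rec = json.loads(line)
            out.append((rec.get("cve_id"), sorted(rec.keys())))
    return out

# Demonstration on a synthetic one-record file:
path = os.path.join(tempfile.gettempdir(), "sample.jsonl")
with open(path, "w") as f:
    f.write(json.dumps({"cve_id": "CVE-2013-7283",
                        "cwe_info": [], "fixes_info": []}) + "\n")
print(peek_jsonl(path))  # [('CVE-2013-7283', ['cve_id', 'cwe_info', 'fixes_info'])]
```

Pointing peek_jsonl at a real output file should show the cve table columns plus the nested cwe_info and fixes_info fields attached in Step 7.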

Preprocessing

The current splits (train_data_part1.jsonl, train_data_part2.jsonl, val_data.jsonl, test_data.jsonl) contain raw data from all tables. Preprocessing (e.g., feature extraction, normalization) will be addressed in subsequent steps depending on your use case.

Copyright and License

Copyright © 2021-2024 Data-Driven Software Engineering Department (dataSED), Simula Research Laboratory, Norway

This work is licensed under the Creative Commons Attribution 4.0 International License.

Reference

The original dataset is sourced from:

CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software
Guru Bhandari, Amara Naseer, Leon Moonen
Simula Research Laboratory, Oslo, Norway

For more details, refer to the original publication at https://dl.acm.org/doi/10.1145/3475960.3475985.

