---
title: CVEfixes Data Splits
description: A detailed dataset split from CVEfixes_v1.0.8 for vulnerability analysis, including train, validation, and test sets.
author: Mohammad Taghavi
date: 2025-03-31
source: "CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software"
tags:
- security
- vulnerabilities
- dataset
- software-security
license: cc-by-4.0
---

# CVEfixes Data Splits README

This repository contains data splits derived from the `CVEfixes_v1.0.8` dataset, an automated collection of vulnerabilities and their fixes from open-source software. The dataset has been processed and split into training, validation, and test sets to facilitate machine learning and vulnerability analysis tasks. Below, you’ll find details about the splits, the problematic CVEs excluded due to memory constraints, and a comprehensive guide on how to recreate these splits yourself.

## Dataset Overview

The original `CVEfixes_v1.0.8` dataset was sourced from the GitHub repository `https://github.com/secureIT-project/CVEfixes`. We’ve split it into four parts:
- **Training Split (Part 1)**: 4000 CVEs (first portion of the 70% training data)
- **Training Split (Part 2)**: 4307 CVEs (remaining portion of the 70% training data, totaling 8307 CVEs with Part 1)
- **Validation Split**: 1781 CVEs (15% of the dataset)
- **Test Split**: 1781 CVEs (15% of the dataset)

These splits include full data from all tables in the `CVEfixes.db` SQLite database, preserving referential integrity across tables such as `cve`, `fixes`, `commits`, `file_change`, `method_change`, `cwe`, `cwe_classification`, and `repository`.

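To get a feel for the bundled structure, here is a minimal sketch (assuming you have downloaded one of the split files, e.g. `train_data_part1.jsonl`, into your working directory) that reads the first record and walks the nesting produced in Step 7 below:

```
import json

# Read the first bundled record from a split file and inspect its shape.
with open("train_data_part1.jsonl") as f:
    record = json.loads(f.readline())

print(record["cve_id"])
print(sorted(record.keys()))  # cve columns plus 'cwe_info' and 'fixes_info'
for fix in record["fixes_info"]:
    commit = fix.get("commit_details", {})
    print(fix["hash"], "files changed:", len(commit.get("file_changes", [])))
```
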
### Excluded CVEs
The following CVEs were excluded from processing due to excessive memory usage (>50GB RAM), which caused runtime crashes on standard Colab environments:
- `CVE-2021-3957`
- `CVE-2024-26152`
- `CVE-2016-5833`
- `CVE-2023-6848`

If your system has less than 50GB of RAM, we recommend skipping these CVEs during processing to avoid crashes.

## How to Create Your Own Data Split

Below is a step-by-step guide to download, extract, and split the `CVEfixes_v1.0.8` dataset into training, validation, and test sets, mirroring the process used to create these splits. This includes Python code snippets ready to run in a Google Colab environment.

### Step 1: Download the Original ZIP File
Download the dataset from Hugging Face using the `huggingface_hub` library.

```
from huggingface_hub import snapshot_download

repo_id = "starsofchance/CVEfixes_v1.0.8"
filename = "CVEfixes_v1.0.8.zip"
dataset_path = snapshot_download(
    repo_id=repo_id,
    repo_type="dataset",
    allow_patterns=filename  # Only download the ZIP file, not the splits we created
)
print(f"Dataset downloaded to: {dataset_path}")
```

After the download completes, you will see a message like `Dataset downloaded to: /path/to/cache`. Copy that path; you will need it in Step 3.

### Step 2: Create a Folder to Extract the Data
Set up a directory to extract the contents of the ZIP file.

```
import os

extract_dir = "/content/extracted_data"
os.makedirs(extract_dir, exist_ok=True)
print(f"Extraction directory created at: {extract_dir}")
```

### Step 3: Decompress and Convert to SQLite Database
Extract the `.sql.gz` file from the ZIP and convert it into a SQLite database.

```
cache_path = "address you copied in Step 1"
zip_file_path = os.path.join(cache_path, "CVEfixes_v1.0.8.zip")
!unzip -q "{zip_file_path}" -d "{extract_dir}"

# Verify extraction
print("\nExtracted files:")
!ls -lh "{extract_dir}"
```

Then decompress the `.gz` file and pipe it into SQLite:

```
!zcat {extract_dir}/CVEfixes_v1.0.8/Data/CVEfixes_v1.0.8.sql.gz | sqlite3 /content/CVEfixes.db
print("Database created at: /content/CVEfixes.db")
```

### Step 4: Explore Tables and Relationships
Connect to the database and inspect its structure.

```
import sqlite3
import pandas as pd

# Connect to the database
conn = sqlite3.connect('/content/CVEfixes.db')
cursor = conn.cursor()

# Get all tables
cursor.execute("SELECT name FROM sqlite_master WHERE type='table';")
tables = cursor.fetchall()
print("Tables in the database:", tables)

# Display column headers for each table
for table in tables:
    table_name = table[0]
    print(f"\nHeaders for table '{table_name}':")
    cursor.execute(f"PRAGMA table_info('{table_name}');")
    columns = cursor.fetchall()
    column_names = [col[1] for col in columns]
    print(f"Columns: {column_names}")

# Count rows in each table
for table in tables:
    table_name = table[0]
    cursor.execute(f"SELECT COUNT(*) FROM {table_name}")
    row_count = cursor.fetchone()[0]
    print(f"Table: {table_name}, Rows: {row_count}")

conn.close()
```

**Expected Output**:
```
Tables in the database: [('fixes',), ('commits',), ('file_change',), ('method_change',), ('cve',), ('cwe',), ('cwe_classification',), ('repository',)]

Headers for table 'fixes':
Columns: ['cve_id', 'hash', 'repo_url']

Headers for table 'commits':
Columns: ['hash', 'repo_url', 'author', 'author_date', 'author_timezone', 'committer', 'committer_date', 'committer_timezone', 'msg', 'merge', 'parents', 'num_lines_added', 'num_lines_deleted', 'dmm_unit_complexity', 'dmm_unit_interfacing', 'dmm_unit_size']

[... truncated for brevity ...]

Table: fixes, Rows: 12923
Table: commits, Rows: 12107
Table: file_change, Rows: 51342
Table: method_change, Rows: 277948
Table: cve, Rows: 11873
Table: cwe, Rows: 272
Table: cwe_classification, Rows: 12198
Table: repository, Rows: 4249
```

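The row counts above cover each table in isolation; the cross-table links can be spot-checked with a join. For example, `fixes` maps each `cve_id` to the fixing commit's `hash` and `repo_url`, which key into `commits` (a minimal sketch):

```
import sqlite3

conn = sqlite3.connect('/content/CVEfixes.db')
cursor = conn.cursor()

# Count fix records whose (hash, repo_url) resolves to a row in `commits`
cursor.execute("""
    SELECT COUNT(*) FROM fixes
    JOIN commits ON fixes.hash = commits.hash
               AND fixes.repo_url = commits.repo_url;
""")
print("fixes with a matching commit:", cursor.fetchone()[0])
conn.close()
```
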
### Step 5: Retrieve All Distinct CVE IDs
Extract unique CVE IDs from the `cve` table, which serves as the anchor for the dataset.

```
import sqlite3

conn = sqlite3.connect('/content/CVEfixes.db')
cursor = conn.cursor()

cursor.execute("SELECT DISTINCT cve_id FROM cve;")
cve_ids = [row[0] for row in cursor.fetchall()]
print(f"Total CVEs found: {len(cve_ids)}")

conn.close()
```

### Step 6: Split the CVE IDs
Randomly shuffle and split the CVE IDs into training (70%), validation (15%), and test (15%) sets.

```
import random
import json

# Shuffle and split the dataset
random.shuffle(cve_ids)
n = len(cve_ids)
train_split = cve_ids[:int(0.70 * n)]             # 70% for training
val_split = cve_ids[int(0.70 * n):int(0.85 * n)]  # 15% for validation
test_split = cve_ids[int(0.85 * n):]              # 15% for test

# Save the splits to JSON files
with open('/content/train_split.json', 'w') as f:
    json.dump(train_split, f)
with open('/content/val_split.json', 'w') as f:
    json.dump(val_split, f)
with open('/content/test_split.json', 'w') as f:
    json.dump(test_split, f)

# Print split sizes
print("Train count:", len(train_split))
print("Validation count:", len(val_split))
print("Test count:", len(test_split))
```

**Expected Output**:
```
Total CVEs found: 11873
Train count: 8311
Validation count: 1781
Test count: 1781
```

The published training parts total 8307 CVEs rather than 8311 because the four problematic CVEs listed above are skipped during processing in Step 7.

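Note that `random.shuffle` is unseeded here, so each run produces a different partition. If you need a reproducible split, seed the generator first (a minimal sketch; the seed value is arbitrary, and the seed used for the published splits is not recorded):

```
import random

random.seed(42)  # any fixed seed gives a reproducible shuffle
random.shuffle(cve_ids)
```
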
### Step 7: Process CVEs into JSONL Files
Define a function to bundle data for each CVE across all tables and write it to JSONL files. Below is an example script to process the training split, skipping problematic CVEs. You can adapt it for validation and test splits by changing the input and output files.

```
import sqlite3
import json
import time
import gc
import os

def dict_factory(cursor, row):
    if cursor.description is None or row is None:
        return None
    return {col[0]: row[idx] for idx, col in enumerate(cursor.description)}

def get_cwe_data(cursor, cve_id):
    cursor.execute("""
        SELECT cwe.* FROM cwe
        JOIN cwe_classification ON cwe.cwe_id = cwe_classification.cwe_id
        WHERE cwe_classification.cve_id = ?;
    """, (cve_id,))
    return cursor.fetchall()

def get_repository_data(cursor, repo_url, repo_cache):
    if repo_url in repo_cache:
        return repo_cache[repo_url]
    cursor.execute("SELECT * FROM repository WHERE repo_url = ?;", (repo_url,))
    repo_data = cursor.fetchone()
    repo_cache[repo_url] = repo_data
    return repo_data

def get_method_changes(cursor, file_change_id):
    cursor.execute("SELECT * FROM method_change WHERE file_change_id = ?;", (file_change_id,))
    return cursor.fetchall()

def get_file_changes(cursor, commit_hash):
    cursor.execute("SELECT * FROM file_change WHERE hash = ?;", (commit_hash,))
    file_changes = []
    for fc_row in cursor.fetchall():
        file_change_data = fc_row
        if file_change_data:
            file_change_data['method_changes'] = get_method_changes(cursor, file_change_data['file_change_id'])
            file_changes.append(file_change_data)
    return file_changes

def get_commit_data(cursor, commit_hash, repo_url, repo_cache):
    cursor.execute("SELECT * FROM commits WHERE hash = ? AND repo_url = ?;", (commit_hash, repo_url))
    commit_row = cursor.fetchone()
    if not commit_row:
        return None
    commit_data = commit_row
    commit_data['repository'] = get_repository_data(cursor, repo_url, repo_cache)
    commit_data['file_changes'] = get_file_changes(cursor, commit_hash)
    return commit_data

def get_fixes_data(cursor, cve_id, repo_cache):
    cursor.execute("SELECT * FROM fixes WHERE cve_id = ?;", (cve_id,))
    fixes = []
    for fix_row in cursor.fetchall():
        fix_data = fix_row
        if fix_data:
            commit_details = get_commit_data(cursor, fix_data['hash'], fix_data['repo_url'], repo_cache)
            if commit_details:
                fix_data['commit_details'] = commit_details
            fixes.append(fix_data)
    return fixes

def process_cve(cursor, cve_id, repo_cache):
    cursor.execute("SELECT * FROM cve WHERE cve_id = ?;", (cve_id,))
    cve_row = cursor.fetchone()
    if not cve_row:
        return None
    cve_data = cve_row
    cve_data['cwe_info'] = get_cwe_data(cursor, cve_id)
    cve_data['fixes_info'] = get_fixes_data(cursor, cve_id, repo_cache)
    return cve_data

def process_split(split_name, split_file, db_path, output_file):
    print(f"--- Processing {split_name} split ---")
    conn = sqlite3.connect(db_path)
    conn.row_factory = dict_factory
    cursor = conn.cursor()
    repo_cache = {}

    with open(split_file, 'r') as f:
        cve_ids = json.load(f)

    skip_cves = ["CVE-2021-3957", "CVE-2024-26152", "CVE-2016-5833", "CVE-2023-6848"]
    with open(output_file, 'w') as outfile:
        for i, cve_id in enumerate(cve_ids):
            if cve_id in skip_cves:
                print(f"Skipping {cve_id} due to memory constraints.")
                continue
            try:
                cve_bundle = process_cve(cursor, cve_id, repo_cache)
                if cve_bundle:
                    outfile.write(json.dumps(cve_bundle) + '\n')
                if (i + 1) % 50 == 0:
                    print(f"Processed {i + 1}/{len(cve_ids)} CVEs")
                    gc.collect()
            except Exception as e:
                print(f"Error processing {cve_id}: {e}")
                continue

    conn.close()
    gc.collect()
    print(f"Finished processing {split_name} split. Output saved to {output_file}")

# Example usage for training split
process_split(
    split_name="train",
    split_file="/content/train_split.json",
    db_path="/content/CVEfixes.db",
    output_file="/content/train_data.jsonl"
)
```

**Notes**:
- Replace `train` with `val` or `test` and adjust file paths to process the other splits, as in the sketch below.
- The script skips the problematic CVEs listed above.
- Output is written to a `.jsonl` file, with one JSON object per line.
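
For example, the remaining two splits can be produced with the same function; the output file names below match the published `val_data.jsonl` and `test_data.jsonl`:

```
# Same function, pointed at the validation and test splits
process_split(
    split_name="val",
    split_file="/content/val_split.json",
    db_path="/content/CVEfixes.db",
    output_file="/content/val_data.jsonl"
)
process_split(
    split_name="test",
    split_file="/content/test_split.json",
    db_path="/content/CVEfixes.db",
    output_file="/content/test_data.jsonl"
)
```
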
## Preprocessing
The current splits (`train_data_part1.jsonl`, `train_data_part2.jsonl`, `val_data.jsonl`, `test_data.jsonl`) contain raw data from all tables. Preprocessing (e.g., feature extraction, normalization) will be addressed in subsequent steps depending on your use case.

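As one hypothetical first pass (illustrative only, not a prescribed pipeline), the sketch below flattens each bundled record into one row per fixing commit, keeping a few scalar features whose names come from the `fixes` and `commits` columns shown in Step 4:

```
import json

# Hypothetical preprocessing sketch: one flat row per (CVE, fixing commit).
def flatten_records(jsonl_path):
    rows = []
    with open(jsonl_path) as f:
        for line in f:
            rec = json.loads(line)
            for fix in rec.get("fixes_info", []):
                commit = fix.get("commit_details") or {}
                rows.append({
                    "cve_id": rec["cve_id"],
                    "repo_url": fix["repo_url"],
                    "commit_msg": commit.get("msg"),
                    "num_lines_added": commit.get("num_lines_added"),
                    "num_lines_deleted": commit.get("num_lines_deleted"),
                    "n_files_changed": len(commit.get("file_changes", [])),
                })
    return rows

rows = flatten_records("/content/val_data.jsonl")
print(f"{len(rows)} commit-level rows")
```
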
## Copyright and License
Copyright © 2021-2024 Data-Driven Software Engineering Department (dataSED), Simula Research Laboratory, Norway

This work is licensed under the [Creative Commons Attribution 4.0 International License](https://creativecommons.org/licenses/by/4.0/).

### Reference
The original dataset is sourced from:

**CVEfixes: Automated Collection of Vulnerabilities and Their Fixes from Open-Source Software**
Guru Bhandari, Amara Naseer, Leon Moonen
Simula Research Laboratory, Oslo, Norway
- Guru Bhandari: guru@simula.no
- Amara Naseer: amara@simula.no
- Leon Moonen: leon.moonen@computer.org

For more details, refer to the original publication at `https://dl.acm.org/doi/10.1145/3475960.3475985`.
|