nherrer1 Machlovi committed on
Commit 9e07e2d · 0 Parent(s)

Duplicate from Machlovi/Hatebase

Co-authored-by: Naseem Machlovi <Machlovi@users.noreply.huggingface.co>
.gitattributes ADDED
@@ -0,0 +1,59 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mds filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
+ # Video files - compressed
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.webm filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,115 @@
+ ---
+ dataset_info:
+   features:
+   - name: tweet
+     dtype: string
+   - name: category
+     dtype: string
+   - name: data
+     dtype: string
+   - name: class
+     dtype: string
+   splits:
+   - name: train
+     num_bytes: 34225882
+     num_examples: 236738
+   - name: test
+     num_bytes: 3789570
+     num_examples: 26313
+   download_size: 20731348
+   dataset_size: 38015452
+ configs:
+ - config_name: default
+   data_files:
+   - split: train
+     path: data/train-*
+   - split: test
+     path: data/test-*
+ ---
+ # Combined Dataset
+
+ This dataset contains tweets classified into various categories, with an additional moderator label indicating whether each tweet is safe.
+
+ ## Features
+
+ - **tweet**: The text of the tweet.
+ - **class**: The category of the tweet (e.g., `neutral`, `hatespeech`, `counterspeech`).
+ - **data**: Additional information about the tweet.
+ - **moderator**: A label indicating whether the tweet is `safe` or `unsafe`.
+
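The fields above can be sanity-checked offline with a small sketch. The rows below are illustrative stand-ins, not real dataset entries; only the field names come from the list above:

```python
from collections import Counter

# Toy rows mimicking the fields listed above; values are
# illustrative stand-ins, not actual rows from the dataset.
rows = [
    {"tweet": "example a", "class": "neutral", "data": "src-a", "moderator": "safe"},
    {"tweet": "example b", "class": "hatespeech", "data": "src-b", "moderator": "unsafe"},
    {"tweet": "example c", "class": "counterspeech", "data": "src-c", "moderator": "safe"},
]

# Tally the moderator label, the safe/unsafe flag described above.
counts = Counter(r["moderator"] for r in rows)
print(counts["safe"], counts["unsafe"])  # 2 1
```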
+ ## Usage
+
+ This dataset is intended for training models for text classification, hate speech detection, or sentiment analysis.
+
+ ## Licensing
+
+ This dataset is licensed under the [MIT License](https://opensource.org/licenses/MIT).
+
+
+ ### The Hatebase dataset has been curated from multiple benchmark datasets and converted into a binary classification problem.
+ The following benchmark datasets were used:
+ - HateXplain: hate, offensive, and neither converted to binary classification
+ - Peace Violence: Peace and Violence (4 classes) converted to binary classification
+ - Hate Offensive: hate, offensive, and neither converted to binary classification
+ - OWS
+ - Go Emotion
+ - CallmeSexistBut..: binary classification along with a toxicity score
+ - Slur: slur-based multiclass problem (DEG, NDEG, HOM, APPR)
+ - Stormfront: white-supremacist forum with binary classification
+ - UCberkley_HS: multiclass (hate speech, counter hate speech, or neutral); each class has a continuous score, which is converted in our case
+ - BIC: each of the 3 classes (offensive, intent, and lewd/sexual) has a categorical score, converted to binary using a threshold of 0.5
+
+
+ train examples: 222196
+ test examples: 24689
+
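The score-to-label conversion used for the continuous- and categorical-score sources above can be sketched as follows. Only the 0.5 threshold comes from the text; the function name and output labels are illustrative:

```python
# Minimal sketch of the conversion described above: a continuous
# per-class score is thresholded at 0.5 to yield a binary label.
# Function name and labels are illustrative, not from the pipeline.
def to_binary(score, threshold=0.5):
    return "hate" if score >= threshold else "not-hate"

print(to_binary(0.7))  # hate
print(to_binary(0.2))  # not-hate
```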
+ ## Example
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("machlovi/combined-dataset")
+ print(dataset['train'][0])
+ ```
+
+
+ # HateBase
+
+ This resource accompanies our paper accepted in the **Late Breaking Work** track of **HCI International 2025**.
+
+ 📄 **Paper Title:** _Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach_
+ 📍 **Conference:** HCI International 2025 – Late Breaking Work
+ 🔗 [Link to Proceedings](https://2025.hci.international/proceedings.html)
+ 📄 [Link to Paper](https://doi.org/10.48550/arXiv.2508.07063)
+
+ ---
+
+ ## ✨ Description
+
+ As AI systems become more integrated into daily life, the need for safer and more reliable moderation has never been greater. Large Language Models (LLMs) have demonstrated remarkable capabilities, surpassing earlier models in complexity and performance. Their evaluation across diverse tasks has consistently showcased their potential, enabling the development of adaptive and personalized agents. However, despite these advancements, LLMs remain prone to errors, particularly in areas requiring nuanced moral reasoning. They struggle with detecting implicit hate, offensive language, and gender biases due to the subjective and context-dependent nature of these issues. Moreover, their reliance on training data can inadvertently reinforce societal biases, leading to inconsistencies and ethical concerns in their outputs.
+
+ To explore the limitations of LLMs in this role, we developed an experimental framework based on state-of-the-art (SOTA) models to assess human emotions and offensive behaviors. The framework introduces a unified benchmark dataset encompassing 49 distinct categories spanning the wide spectrum of human emotions, offensive and hateful text, and gender and racial biases. Furthermore, we introduced SafePhi, a QLoRA fine-tuned version of Phi-4 adapted to diverse ethical contexts, which outperforms benchmark moderators by achieving a Macro F1 score of 0.89, where the OpenAI Moderator and Llama Guard score 0.77 and 0.74, respectively. This research also highlights the critical domains where LLM moderators consistently underperformed, underscoring the need to incorporate more heterogeneous and representative data with a human-in-the-loop for better model robustness and explainability.
+
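The Macro F1 score quoted above is the unweighted average of per-class F1 scores. A minimal pure-Python sketch (the labels and predictions below are illustrative, not results from the paper):

```python
# Macro F1: average the F1 score of each class, weighting all
# classes equally regardless of their frequency.
def macro_f1(y_true, y_pred):
    labels = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in labels:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

# Illustrative labels: F1(safe)=2/3, F1(unsafe)=4/5, macro = 11/15.
y_true = ["safe", "unsafe", "safe", "unsafe"]
y_pred = ["safe", "unsafe", "unsafe", "unsafe"]
print(round(macro_f1(y_true, y_pred), 3))  # 0.733
```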
+ ## 🚀 Usage
+
+ See the Example section above for loading the dataset with the `datasets` library.
+
+ ## 📖 Citation
+
+ ```bibtex
+ @misc{machlovi2025saferaimoderationevaluating,
+   title={Towards Safer AI Moderation: Evaluating LLM Moderators Through a Unified Benchmark Dataset and Advocating a Human-First Approach},
+   author={Naseem Machlovi and Maryam Saleki and Innocent Ababio and Ruhul Amin},
+   year={2025},
+   eprint={2508.07063},
+   archivePrefix={arXiv},
+   primaryClass={cs.AI},
+   url={https://arxiv.org/abs/2508.07063},
+ }
+ ```
data/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:bbc5201720abb1834e06ae69982ffd334e2631e5f099f238822ccf2c0e26044c
+ size 2066629
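The three lines above are a Git LFS pointer file: a `version` line followed by `key value` pairs for the object's hash and byte size. A small sketch of parsing one (the pointer text is copied from above):

```python
# Parse a Git LFS pointer file ("key value" per line) into a dict.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:bbc5201720abb1834e06ae69982ffd334e2631e5f099f238822ccf2c0e26044c
size 2066629"""

fields = dict(line.split(" ", 1) for line in pointer.splitlines())
algo, digest = fields["oid"].split(":", 1)
print(fields["size"])  # 2066629
print(algo)            # sha256
print(len(digest))     # 64
```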
data/train-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5cca867ba157c603b0a1d40c191ddcf560a524d5d09444f8537a659571d4d8ea
+ size 18664719