RooseBERT-cont-cased

RooseBERT is a domain-specific BERT-based language model pre-trained on English political debates and parliamentary speeches. It is designed to capture the distinctive features of political discourse, including domain-specific terminology, implicit argumentation, and strategic communication patterns.

This variant, cont-cased, was trained via continued pre-training (CONT) of bert-base-cased, initialising from its original weights and vocabulary and training for additional steps on the political debate corpus. This allows the model to leverage BERT's general language understanding while adapting its representations to the political domain. The cased variant preserves capitalisation, making it sensitive to distinctions between proper nouns, acronyms, and common words, which is particularly relevant in political text where named entities and institutional references are abundant.

📄 Paper: RooseBERT: A New Deal For Political Language Modelling
💻 GitHub: https://github.com/deborahdore/RooseBERT


Model Details

| Property | Value |
|---|---|
| Architecture | BERT-base (encoder-only) |
| Training approach | Continued pre-training (CONT) from bert-base-cased |
| Vocabulary | BERT standard cased WordPiece (28,997 tokens) |
| Hidden size | 768 |
| Attention heads | 12 |
| Hidden layers | 12 |
| Max position embeddings | 512 |
| Training steps | 150K |
| Batch size | 2048 |
| Learning rate | 3e-4 (linear warmup + decay) |
| Warmup steps | 10,000 |
| Weight decay | 0.01 |
| Training objective | Masked Language Modelling (MLM, 15% mask rate) |
| Hardware | 8× NVIDIA A100 GPUs |
| Training time | ~24 hours |
| Frameworks | HuggingFace Transformers, DeepSpeed ZeRO-2, FP16 |
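The optimisation schedule in the table (3e-4 peak learning rate, 10,000 warmup steps, 150K total steps) can be sketched as follows; decaying linearly to exactly zero is an assumption, as the card does not state the final learning rate:

```python
def lr_at_step(step, peak_lr=3e-4, warmup_steps=10_000, total_steps=150_000):
    """Linear warmup to peak_lr, then linear decay.

    Decay to zero at total_steps is assumed; the card only says
    "linear warmup + decay".
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```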

The CONT approach initialises from bert-base-cased's pre-trained weights and continues training on the political debate corpus, retaining BERT's standard vocabulary. The model thus benefits from BERT's broad linguistic knowledge while adapting its contextual representations to the political domain, without the overhead of training a new tokenizer or starting from randomly initialised weights. CONT models require fewer training steps than the from-scratch variants (150K vs. 250K for SCR) and can be trained in approximately 24 hours on 8× A100 GPUs.
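For illustration, the MLM objective corrupts 15% of input tokens, conventionally using BERT's 80/10/10 split ([MASK] / random token / unchanged). The sketch below follows that standard recipe; whether RooseBERT's training deviated from it is not stated here:

```python
import random

def mask_tokens(token_ids, vocab_size, mask_id, mask_rate=0.15, seed=0):
    """BERT-style MLM corruption (generic sketch, not RooseBERT's exact code).

    Selects ~mask_rate of positions as prediction targets; of those,
    80% become [MASK], 10% a random token, 10% stay unchanged.
    Returns (corrupted_ids, labels), with -100 marking unselected positions.
    """
    rng = random.Random(seed)
    corrupted, labels = list(token_ids), []
    for i, tid in enumerate(token_ids):
        if rng.random() < mask_rate:
            labels.append(tid)
            r = rng.random()
            if r < 0.8:
                corrupted[i] = mask_id          # 80%: replace with [MASK]
            elif r < 0.9:
                corrupted[i] = rng.randrange(vocab_size)  # 10%: random token
            # remaining 10%: keep the original token
        else:
            labels.append(-100)                 # not a prediction target
    return corrupted, labels
```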


Training Data

RooseBERT was pre-trained on 11GB of English political debate transcripts spanning 1919–2025, drawn from:

| Source | Coverage | Size |
|---|---|---|
| African Parliamentary Debates (Ghana & South Africa) | 1999–2024 | 573 MB |
| Australian Parliamentary Debates | 1998–2025 | 1 GB |
| Canadian Parliamentary Debates | 1994–2025 | 1.1 GB |
| European Parliamentary Debates (EUSpeech) | 2007–2015 | 110 MB |
| Irish Parliamentary Debates | 1919–2019 | ~3.4 GB |
| New Zealand Parliamentary Debates (ParlSpeech) | 1987–2019 | 791 MB |
| Scottish Parliamentary Debates (ParlScot) | –2021 | 443 MB |
| UK House of Commons Debates | 1979–2019 | 2.6 GB |
| UN General Debate Corpus (UNGDC) | 1946–2023 | 186 MB |
| UN Security Council Debates (UNSC) | 1992–2023 | 387 MB |
| US Presidential & Primary Debates | 1960–2024 | 16 MB |

All datasets were sourced from authoritative, official political settings. Pre-processing removed hyperlinks and markup tags, and collapsed runs of whitespace.
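The pre-processing described above can be approximated with a few regular expressions; the exact rules used for RooseBERT are not published in this card, so treat this as an illustrative sketch:

```python
import re

def clean_transcript(text: str) -> str:
    """Approximate the described pre-processing (illustrative, not
    RooseBERT's exact pipeline): strip URLs and HTML-style tags,
    then collapse whitespace."""
    text = re.sub(r"https?://\S+|www\.\S+", " ", text)  # hyperlinks
    text = re.sub(r"<[^>]+>", " ", text)                # markup tags
    text = re.sub(r"\s+", " ", text)                    # collapse whitespace
    return text.strip()
```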


Intended Use

RooseBERT is intended as a base model for fine-tuning on downstream NLP tasks related to political discourse analysis. It is especially well-suited for:

  • Sentiment Analysis of parliamentary speeches and debates
  • Stance Detection (support/oppose classification)
  • Argument Component Detection and Classification (claims and premises)
  • Argument Relation Prediction and Classification (support/attack/no-relation)
  • Motion Policy Classification
  • Named Entity Recognition in political texts

The CONT-cased variant is a strong general-purpose choice for political NLP tasks. It achieves the lowest perplexity of all RooseBERT variants (cased or uncased), reflecting the strongest overall adaptation to the political debate domain. It is particularly recommended for tasks where capitalisation carries semantic weight, such as NER, or where compatibility with the standard BERT vocabulary is required.


How to Use

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ddore14/RooseBERT-cont-cased")
model = AutoModelForMaskedLM.from_pretrained("ddore14/RooseBERT-cont-cased")
```

For fine-tuning on a downstream classification task:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("ddore14/RooseBERT-cont-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "ddore14/RooseBERT-cont-cased",
    num_labels=2
)

# Recommended fine-tuning hyperparameters (from paper):
# learning_rate ∈ {2e-5, 3e-5, 5e-5}
# batch_size ∈ {8, 16, 32}
# epochs ∈ {2, 3, 4}
```
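The hyperparameter grid above corresponds to a 3 × 3 × 3 search space, i.e. 27 runs per task. A minimal sketch of enumerating it (variable names are illustrative):

```python
from itertools import product

learning_rates = [2e-5, 3e-5, 5e-5]
batch_sizes = [8, 16, 32]
epochs = [2, 3, 4]

# Full grid from the paper's search space: 3 * 3 * 3 = 27 configurations.
configs = [
    {"learning_rate": lr, "batch_size": bs, "num_epochs": ep}
    for lr, bs, ep in product(learning_rates, batch_sizes, epochs)
]
```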

Evaluation Results

RooseBERT was evaluated across 10 datasets covering 6 downstream tasks. Results below are for RooseBERT-cont-cased (Macro F1 unless noted).

| Task | Dataset | Metric | RooseBERT-cont-cased | BERT-base-cased |
|---|---|---|---|---|
| Sentiment Analysis | ParlVote | Accuracy | 0.79 | 0.69 |
| Sentiment Analysis | HanDeSeT | Accuracy | 0.74 | 0.67 |
| Stance Detection | ConVote | Accuracy | 0.76 | 0.72 |
| Stance Detection | AusHansard | Accuracy | 0.63 | 0.54 |
| Arg. Component Det. & Class. | ElecDeb60to20 | Macro F1 | 0.63 | 0.61 |
| Arg. Component Det. & Class. | ArgUNSC | Macro F1 | 0.62 | 0.61 |
| Arg. Relation Pred. & Class. | ElecDeb60to20 | Macro F1 | 0.61 | 0.58 |
| Arg. Relation Pred. & Class. | ArgUNSC | Macro F1 | 0.70 | 0.57 |
| Motion Policy Classification | ParlVote+ | Macro F1 | 0.62 | 0.54 |
| NER | NEREx | Macro F1 | 0.90 | 0.92 |

RooseBERT-cont-cased outperforms BERT-base-cased on 9 out of 10 tasks, with the strongest gains on stance detection (+9 points on AusHansard), motion policy classification (+8 points on ParlVote+), and sentiment analysis (+7 points on HanDeSeT). NER performance is comparable to BERT's, as the NEREx dataset uses general rather than politically specific entity categories. Results are averaged over 5 runs with different random seeds.
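Macro F1, the metric used for most tasks above, is the unweighted mean of per-class F1 scores, so minority classes count as much as majority ones. A minimal reference implementation:

```python
def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1; classes are the union of
    labels seen in y_true and y_pred."""
    classes = sorted(set(y_true) | set(y_pred))
    f1s = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)
```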

Perplexity on held-out political debate data:

| Model | Perplexity |
|---|---|
| BERT-base-cased | 22.11 |
| PoliBERTweet | 154.42 |
| ConfliBERT-scr-cased | 4.66 |
| ConfliBERT-cont-cased | 4.37 |
| RooseBERT-scr-cased | 2.80 |
| RooseBERT-cont-cased | 2.61 |

RooseBERT-cont-cased achieves the lowest perplexity of all models evaluated, including all cased and uncased variants, indicating the strongest adaptation to the political debate domain.
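For a masked language model, perplexity is conventionally computed as the exponential of the average negative log-likelihood over masked tokens; whether the paper's evaluation protocol matches this exactly is an assumption. As a sketch:

```python
import math

def perplexity(masked_token_nlls):
    """Perplexity as exp of the mean negative log-likelihood over
    masked tokens (the usual MLM convention; assumed, not confirmed,
    to match the paper's protocol)."""
    return math.exp(sum(masked_token_nlls) / len(masked_token_nlls))

# A perplexity of 2.61 corresponds to a mean NLL of ln(2.61) ~ 0.96
# nats per masked token.
```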


Available Variants

| Model | Training | Casing | HuggingFace ID |
|---|---|---|---|
| RooseBERT-cont-cased (this model) | Continued pre-training | Cased | ddore14/RooseBERT-cont-cased |
| RooseBERT-cont-uncased | Continued pre-training | Uncased | ddore14/RooseBERT-cont-uncased |
| RooseBERT-scr-cased | From scratch | Cased | ddore14/RooseBERT-scr-cased |
| RooseBERT-scr-uncased | From scratch | Uncased | ddore14/RooseBERT-scr-uncased |

CONT (continued pre-training) models inherit BERT's standard vocabulary and pre-trained weights, requiring fewer training steps. SCR (from scratch) models use a custom political vocabulary that encodes domain-specific terms as single tokens (e.g., deterrent, bureaucrat, statutorily). Cased models preserve capitalisation; uncased models lowercase all input.


Limitations

  • RooseBERT is trained exclusively on English political debates. Cross-lingual use is not supported.
  • The model may reflect biases present in official political speech, including over-representation of certain geopolitical perspectives.
  • Because CONT models retain BERT's standard vocabulary, domain-specific political terms may still be split into sub-tokens (e.g., deterrent β†’ ['de', '##ter', '##rent']). For richer domain vocabulary encoding, consider the SCR variants.
  • Performance on NER tasks does not benefit meaningfully from domain-specific pre-training when entity categories are general rather than politically specific.
  • As with all encoder-only models, RooseBERT is best suited to classification and labelling tasks rather than generation.

Citation

If you use RooseBERT in your research, please cite:

```bibtex
@article{dore2025roosebert,
  title={RooseBERT: A New Deal For Political Language Modelling},
  author={Dore, Deborah and Cabrio, Elena and Villata, Serena},
  journal={arXiv preprint arXiv:2508.03250},
  year={2025}
}
```

Acknowledgements

This work was supported by the French government through the 3IA Côte d'Azur programme (ANR-23-IACL-0001). Computing resources were provided by GENCI at IDRIS (grant 2026-AD011016047R1) on the Jean Zay supercomputer.
