cpt-fr-large
Model Summary
cpt-fr-large is the Large variant of our French biomedical encoder, built by continued pretraining of ModernCamemBERT-large with a CLM-detour recipe: instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM. The detour produces lasting representational changes that improve downstream biomedical performance by +1.1pp on average across 8 French biomedical tasks, winning all 8.
| Property | Value |
|---|---|
| Architecture | ModernBERT |
| Parameters | 350M |
| Layers | 28 |
| Hidden size | 1024 |
| Attention heads | 16 |
| Context length | 8,192 tokens |
| Language | French |
| Base model | almanach/moderncamembert-large |
Usage
You can use this model with the transformers library (v4.48.0+):
pip install -U "transformers>=4.48.0"
If your GPU supports it, install Flash Attention for best efficiency, then pass attn_implementation="flash_attention_2" to from_pretrained:
pip install flash-attn --no-build-isolation
Masked Language Modeling
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "rntc/cpt-fr-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "Le patient présente une <mask> aiguë du myocarde."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Locate the <mask> position and decode the most likely token
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)
Fine-tuning (Classification, NER, etc.)
from transformers import AutoTokenizer, AutoModel
model_id = "rntc/cpt-fr-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)
text = "Compte rendu d'hospitalisation du patient admis pour décompensation cardiaque."
inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
outputs = model(**inputs)
# outputs.last_hidden_state: [batch, seq_len, 1024]
Note: cpt-fr-large does not use token type IDs. You can omit the token_type_ids parameter.
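For tasks that need a single vector per document (retrieval, or a lightweight classifier on top of the encoder), a common recipe with encoder outputs is attention-masked mean pooling over last_hidden_state. A minimal sketch, run on random tensors so it executes without downloading the model; hidden and mask stand in for outputs.last_hidden_state and inputs["attention_mask"] from the snippet above:

```python
import torch

def mean_pool(hidden: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
    """Average token embeddings over the sequence, ignoring padding positions."""
    mask = mask.unsqueeze(-1).to(hidden.dtype)   # [batch, seq_len, 1]
    summed = (hidden * mask).sum(dim=1)          # [batch, hidden]
    counts = mask.sum(dim=1).clamp(min=1e-9)     # [batch, 1], avoid div-by-zero
    return summed / counts

# Stand-ins for outputs.last_hidden_state and inputs["attention_mask"]
hidden = torch.randn(2, 10, 1024)
mask = torch.ones(2, 10, dtype=torch.long)
mask[1, 6:] = 0  # second sequence ends with 4 padding tokens

emb = mean_pool(hidden, mask)
print(emb.shape)  # torch.Size([2, 1024])
```

Mean pooling is one of several reasonable choices; pooling on the first token also works with ModernBERT-style encoders, and which is better is task-dependent.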
Training
Data
| Corpus | Tokens | Description |
|---|---|---|
| MC-Bio | 7B | Quality-filtered French biomedical text (scientific articles, drug leaflets, clinical guidelines) |
| MCQA | 2B | Medical question-answer pairs |
| E3C | 400M | Clinical cases from journals and theses |
| EMEA | 600M | Pharmaceutical documents (European Medicines Agency) |
| Total | 10B | |
Methodology
cpt-fr-large is trained in two phases, initialized from ModernCamemBERT-large:
- Phase 1 — CLM detour (25B tokens): The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions) deeply modifies early transformer layers for domain adaptation.
- Phase 2 — MLM decay (2.5B tokens): Bidirectional attention is restored, and the model is trained with masked language modeling at 15% masking. The learning rate decays from peak to 10% following a 1-sqrt schedule.
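The Phase 2 decay can be made concrete. A sketch under one plausible reading of the 1-sqrt schedule (the peak learning rate of 2e-4 is from this section; the exact interpolation form is an assumption):

```python
import math

def one_sqrt_lr(step: int, total_steps: int, peak_lr: float = 2e-4,
                floor_frac: float = 0.1) -> float:
    """1-sqrt decay from peak_lr to floor_frac * peak_lr over total_steps.

    Assumed form: lr(t) = peak * (1 - (1 - floor_frac) * sqrt(t / T)),
    which matches "decays from peak to 10%" at t = T.
    """
    frac = min(step / total_steps, 1.0)
    return peak_lr * (1.0 - (1.0 - floor_frac) * math.sqrt(frac))

print(one_sqrt_lr(0, 1000))     # starts at the peak, 2e-4
print(one_sqrt_lr(1000, 1000))  # ends at 10% of peak
```

Compared to linear decay, the 1-sqrt shape drops quickly early on and flattens near the end of training.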
Both phases use the same data mix. Training used AdamW (lr=2e-4, beta1=0.9, beta2=0.98), bf16 mixed precision, global batch size of 384 sequences (~3.1M tokens), on 4x H100 GPUs with Composer.
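The attention-mask swap between the two phases can be illustrated in plain PyTorch. A toy sketch, not the actual training code: the same encoder weights attend causally in Phase 1 and bidirectionally in Phase 2, and the boolean mask is the only thing that changes:

```python
import torch

seq_len = 6

# Phase 1 (CLM detour): position i may only attend to positions <= i.
causal_mask = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))

# Phase 2 (MLM decay): every position attends to every position.
bidirectional_mask = torch.ones(seq_len, seq_len, dtype=torch.bool)

print(causal_mask.sum().item(), "of", seq_len * seq_len,
      "attention edges kept under the causal mask")  # 21 of 36
```

The supervision density differs the same way: CLM computes a loss at every position, while MLM at 15% masking supervises roughly 1 token in 7.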
Why a CLM Detour?
CLM supervises every token position, producing dense gradient updates that deeply modify early transformer layers. These changes persist through the MLM decay phase — a phenomenon we call computational hysteresis. The Large model retains 67.2% CKA divergence from its MLM counterpart, compared to 56.5% for Base, showing that hysteresis scales with model capacity. See our paper for the full mechanistic analysis.
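CKA divergence here reads as 1 − CKA between matched layers of the two models. A self-contained sketch of linear CKA on random features (whether the paper uses the linear or kernel variant, and minibatch vs. full-batch estimation, is not specified in this card):

```python
import torch

def linear_cka(x: torch.Tensor, y: torch.Tensor) -> float:
    """Linear CKA between two [n_samples, dim] representation matrices."""
    x = x - x.mean(dim=0, keepdim=True)  # center each feature
    y = y - y.mean(dim=0, keepdim=True)
    num = torch.linalg.norm(y.T @ x) ** 2                         # ||Y^T X||_F^2
    den = torch.linalg.norm(x.T @ x) * torch.linalg.norm(y.T @ y)  # Frobenius norms
    return (num / den).item()

torch.manual_seed(0)
a = torch.randn(1000, 64)
b = torch.randn(1000, 64)
print(linear_cka(a, a))  # ~1.0 for identical representations
print(linear_cka(a, b))  # low for unrelated features
```

CKA is invariant to isotropic scaling and orthogonal rotation of either representation, which is why it is a standard choice for comparing layers across differently trained models.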
Evaluation
French biomedical benchmark results (8 tasks, 9 seeds per model, macro-averaged F1):
| Model | Ctx | FrACCO-30 | FrACCO-100 | CANTEMIST | DISTEMIST | MedDialog | DiaMed | EMEA | Medline | Avg |
|---|---|---|---|---|---|---|---|---|---|---|
| cpt-fr-large | 8192 | 80.7 | 65.4 | 74.4 | 30.4 | 64.5 | 64.8 | 67.0 | 59.5 | 63.3 |
| MLM baseline Large (ours) | 8192 | 79.4 | 63.3 | 72.6 | 29.1 | 64.5 | 64.8 | 66.1 | 58.0 | 62.2 |
| cpt-fr-base | 8192 | 74.8 | 60.1 | 71.0 | 25.5 | 63.6 | 67.4 | 65.9 | 58.2 | 60.8 |
| ModernCamemBERT | 8192 | 70.1 | 55.3 | 63.3 | 20.2 | 60.6 | 56.4 | 63.4 | 55.3 | 55.6 |
| DrBERT | 512 | 53.0 | 35.6 | 37.9 | 21.4 | 63.6 | 57.0 | 68.0 | 62.3 | 49.9 |
cpt-fr-large achieves 63.3% average F1, the highest score among French biomedical models (+1.1pp over the Large MLM baseline, +2.5pp over cpt-fr-base, and 8/8 task wins over the MLM baseline).
Intended Use
This model is designed for French biomedical and clinical NLP tasks:
- Named entity recognition (diseases, chemicals, procedures)
- Document classification (clinical specialties, ICD coding)
- Multilabel classification on long clinical documents
- Information extraction from clinical reports, drug leaflets, and scientific articles
The 8,192-token context is critical for long clinical documents (discharge summaries, oncology reports) that are truncated by 512-token models. The Large size provides improved performance over Base at the cost of higher compute requirements.
Limitations
- Trained on French biomedical text; not suitable for other languages without further adaptation.
- Encoder model: produces contextualized representations, does not generate text.
- Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations.
License
Apache 2.0
Citation
@inproceedings{anonymous2026clm,
title={Under review},
author={Anonymous},
booktitle={Under review},
year={2026}
}