cpt-fr-base

Table of Contents

  1. Model Summary
  2. Usage
  3. Training
  4. Evaluation
  5. Intended Use
  6. Limitations
  7. License
  8. Citation

Model Summary

cpt-fr-base is a French biomedical encoder built by continued pretraining of ModernCamemBERT using a CLM detour recipe. Instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM. This produces lasting representational changes in early transformer layers that improve downstream biomedical performance by +2.9pp on average across 8 French biomedical tasks.

The model uses the ModernBERT architecture with FlashAttention, rotary positional embeddings (RoPE), alternating local/global attention, and unpadding, supporting 8,192-token context — critical for long clinical documents that exceed the 512-token limit of previous French biomedical models.

Architecture     ModernBERT
Parameters       150M
Layers           22
Hidden size      768
Attention heads  12
Context length   8,192 tokens
Language         French
Base model       almanach/moderncamembert-base

Usage

You can use this model with the transformers library (v4.48.0+):

pip install -U "transformers>=4.48.0"

If your GPU supports it, install FlashAttention 2 for best efficiency; transformers uses it automatically when available, or you can request it explicitly by passing attn_implementation="flash_attention_2" to from_pretrained:

pip install flash-attn

Masked Language Modeling

from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "rntc/cpt-fr-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "Le patient présente une <mask> aiguë du myocarde."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1).item()
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)

Fine-tuning (Classification, NER, etc.)

from transformers import AutoTokenizer, AutoModel

model_id = "rntc/cpt-fr-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "Compte rendu d'hospitalisation du patient admis pour décompensation cardiaque."
inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
outputs = model(**inputs)
# outputs.last_hidden_state: [batch, seq_len, 768]

Note: cpt-fr-base does not use token type IDs. You can omit the token_type_ids parameter.
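For sentence-level features (e.g. classification heads or retrieval), a common recipe is attention-masked mean pooling over last_hidden_state. A minimal sketch with dummy tensors standing in for model outputs — mean_pool is an illustrative helper, not part of the model's API:

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Average token embeddings, ignoring padded positions.
    mask = attention_mask.unsqueeze(-1).float()      # [batch, seq_len, 1]
    summed = (last_hidden_state * mask).sum(dim=1)   # [batch, hidden]
    counts = mask.sum(dim=1).clamp(min=1e-9)         # [batch, 1]
    return summed / counts

# Dummy batch standing in for model outputs: 2 sequences, 5 tokens, hidden 768.
hidden = torch.randn(2, 5, 768)
attn = torch.tensor([[1, 1, 1, 0, 0],
                     [1, 1, 1, 1, 1]])
emb = mean_pool(hidden, attn)
print(emb.shape)  # torch.Size([2, 768])
```

In practice, hidden and attn would be outputs.last_hidden_state and inputs["attention_mask"] from the fine-tuning snippet above.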

Training

Data

Corpus   Tokens   Description
MC-Bio   7B       Quality-filtered French biomedical text (scientific articles, drug leaflets, clinical guidelines)
MCQA     2B       Medical question-answer pairs
E3C      400M     Clinical cases from journals and theses
EMEA     600M     Pharmaceutical documents (European Medicines Agency)
Total    10B

Methodology

cpt-fr-base is trained in two phases, initialized from ModernCamemBERT:

  • Phase 1 — CLM detour (10B tokens): The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions) deeply modifies early transformer layers for domain adaptation.
  • Phase 2 — MLM decay (1B tokens): Bidirectional attention is restored, and the model is trained with masked language modeling at 15% masking. The learning rate decays from peak to 10% following a 1-sqrt schedule.
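Conceptually, the Phase 1 mask swap amounts to replacing a full (bidirectional) attention mask with a lower-triangular (causal) one. A toy illustration, not the actual training code:

```python
import torch

seq_len = 6

# Bidirectional (MLM) mask: every position attends to every other position.
bidirectional = torch.ones(seq_len, seq_len, dtype=torch.bool)

# Causal (CLM) mask: position i attends only to positions j <= i,
# so the model can be trained with next-token prediction at every position.
causal = torch.ones(seq_len, seq_len).tril().bool()

print(causal.int())
```

During the CLM detour this causal mask is used with a next-token objective; Phase 2 restores the bidirectional mask and the MLM objective.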

Both phases use the same data mix. Training used AdamW (lr=2e-4, beta1=0.9, beta2=0.98), bf16 mixed precision, global batch size of 384 sequences (~3.1M tokens), on 4x H100 GPUs with Composer.
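The 1-sqrt decay in Phase 2 can be sketched as follows. The exact parameterization used in training is not reproduced here, so one_sqrt_lr and its final_frac argument are illustrative assumptions matching the stated peak-to-10%-of-peak behavior:

```python
import math

def one_sqrt_lr(step, total_steps, peak_lr=2e-4, final_frac=0.1):
    # 1-sqrt decay: lr falls from peak_lr at step 0 down to
    # final_frac * peak_lr at total_steps, following 1 - sqrt(t/T).
    # (Parameterization is an assumption, not the training code.)
    frac = math.sqrt(step / total_steps)
    return peak_lr * (1.0 - (1.0 - final_frac) * frac)

print(one_sqrt_lr(0, 1000))     # peak: 2e-4
print(one_sqrt_lr(1000, 1000))  # 10% of peak: 2e-5
```

The sqrt shape front-loads the decay: the learning rate drops quickly early in the decay phase and flattens toward the end.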

Why a CLM Detour?

CLM supervises every token position, producing dense gradient updates that deeply modify early transformer layers (layers 0-7). These changes persist through the MLM decay phase — a phenomenon we call computational hysteresis. We provide causal evidence through freeze interventions: freezing early layers during CLM eliminates the downstream benefit entirely, while freezing mid layers preserves it (double dissociation). See our paper for the full mechanistic analysis.
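The freeze intervention itself is mechanically simple: gradient updates are disabled for a chosen band of layers. A toy sketch on a dummy 22-layer stack — freeze_layers and the nn.Linear stand-ins are illustrative, not the actual experiment code:

```python
import torch.nn as nn

# Dummy stand-in for a 22-layer encoder (hypothetical structure).
encoder = nn.ModuleList([nn.Linear(768, 768) for _ in range(22)])

def freeze_layers(layers, indices):
    # Disable gradient updates for the given layer indices.
    for i in indices:
        for p in layers[i].parameters():
            p.requires_grad = False

# Freeze-early intervention: layers 0-7 stay fixed during the CLM detour.
freeze_layers(encoder, range(0, 8))

trainable = sum(p.requires_grad for layer in encoder for p in layer.parameters())
print(trainable)  # 28: weight + bias tensors for the 14 unfrozen layers
```

The freeze-mid condition would instead pass, e.g., range(8, 16), leaving the early layers free to change.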

Evaluation

French biomedical benchmark results (8 tasks, 9 seeds per model, macro-averaged F1):

Model                Ctx    FrACCO-30  FrACCO-100  CANTEMIST  DISTEMIST  MedDialog  DiaMed  EMEA  Medline  Avg
cpt-fr-base          8192   74.8       60.1        71.0       25.5       63.6       67.4    65.9  58.2     60.8
MLM baseline (ours)  8192   69.9       56.8        64.9       23.5       62.5       63.4    65.4  56.8     57.9
ModernCamemBERT      8192   70.1       55.3        63.3       20.2       60.6       56.4    63.4  55.3     55.6
DrBERT               512    53.0       35.6        37.9       21.4       63.6       57.0    68.0  62.3     49.9
CamemBERT-bio        512    41.9       20.1        12.8        9.6       38.6       47.7    61.6  56.6     36.1

cpt-fr-base outperforms the matched MLM baseline on all 8 tasks (+2.9pp, binomial p=0.004).
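The reported p-value is consistent with a one-sided sign test: the probability of winning all 8 tasks if each task were a fair coin flip:

```python
# One-sided binomial (sign) test: P(8 wins out of 8) under p = 0.5.
p_value = 0.5 ** 8
print(round(p_value, 4))  # 0.0039, i.e. p ≈ 0.004
```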

Intended Use

This model is designed for French biomedical and clinical NLP tasks:

  • Named entity recognition (diseases, chemicals, procedures)
  • Document classification (clinical specialties, ICD coding)
  • Multilabel classification on long clinical documents
  • Information extraction from clinical reports, drug leaflets, and scientific articles

The 8,192-token context is critical for long clinical documents (discharge summaries, oncology reports) that are truncated by 512-token models.

Limitations

  • Trained on French biomedical text; not suitable for other languages without further adaptation.
  • Encoder model: produces contextualized representations, does not generate text.
  • Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations.

License

Apache 2.0

Citation

@inproceedings{anonymous2026clm,
  title={Under review},
  author={Anonymous},
  booktitle={Under review},
  year={2026}
}
