cpt-en-large

Table of Contents

  1. Model Summary
  2. Usage
  3. Training
  4. Evaluation
  5. Intended Use
  6. Limitations
  7. License
  8. Citation

Model Summary

cpt-en-large is the Large variant of our English biomedical encoder, built by continued pretraining of ModernBERT-large using a CLM detour recipe. Instead of standard MLM continued pretraining, we temporarily switch to causal language modeling (CLM) before returning to MLM.

cpt-en-large achieves the highest overall score in our comparison, 78.7% average F1 across 11 English biomedical benchmarks, outperforming both the MLM baseline (+0.8pp, winning 7 of 11 tasks) and all other evaluated models.

Architecture     ModernBERT (FlashAttention, RoPE, alternating local/global attention, unpadding)
Parameters       396M
Layers           28
Hidden size      1024
Attention heads  16
Context length   8,192 tokens
Language         English
Base model       answerdotai/ModernBERT-large

Usage

You can use this model with the transformers library (v4.48.0+):

pip install -U "transformers>=4.48.0"

If your GPU supports it, install FlashAttention for best efficiency:

pip install flash-attn --no-build-isolation

Masked Language Modeling

import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

model_id = "rntc/cpt-en-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

text = "The patient was diagnosed with [MASK] and started on antibiotics."
inputs = tokenizer(text, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Locate the [MASK] position and decode the top prediction
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
predicted_token_id = outputs.logits[0, masked_index].argmax(dim=-1)
predicted_token = tokenizer.decode(predicted_token_id)
print("Predicted token:", predicted_token)

Fine-tuning (Classification, NER, etc.)

from transformers import AutoTokenizer, AutoModel

model_id = "rntc/cpt-en-large"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

text = "The patient presented with acute myocardial infarction and was treated with percutaneous coronary intervention."
inputs = tokenizer(text, return_tensors="pt", max_length=8192, truncation=True)
outputs = model(**inputs)
# outputs.last_hidden_state: [batch, seq_len, 1024]

Note: cpt-en-large does not use token type IDs. You can omit the token_type_ids parameter.
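If you need a single vector per document (e.g. for classification heads or retrieval), one common recipe is masked mean pooling over last_hidden_state; the model card does not prescribe a pooling strategy, so treat this as one reasonable choice. A minimal sketch using plain Python lists in place of the tensors (in practice you would use the equivalent torch operations):

```python
def mean_pool(hidden, attention_mask):
    """Average the hidden vectors of non-padding positions.

    hidden: list of seq_len vectors (stand-in for outputs.last_hidden_state[0]).
    attention_mask: list of 0/1 ints, one per position.
    """
    dim = len(hidden[0])
    count = sum(attention_mask)
    return [
        sum(vec[d] for vec, m in zip(hidden, attention_mask) if m) / count
        for d in range(dim)
    ]

# Two real tokens followed by one padding position
hidden = [[1.0, 2.0], [3.0, 4.0], [9.0, 9.0]]
mask = [1, 1, 0]
print(mean_pool(hidden, mask))  # [2.0, 3.0]
```

The padding position is excluded both from the sum and from the divisor, which is what distinguishes masked mean pooling from a naive average.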

Training

Data

Corpus     Proportion   Description
PubMed     60%          Biomedical abstracts
Med-Inst   20%          Medical instructions
MIMIC      20%          Clinical notes
Total      50B tokens   Single epoch
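Assuming the 50B-token total is split exactly by the listed percentages, the per-corpus token budgets work out as follows:

```python
total_tokens = 50_000_000_000  # 50B tokens, single epoch
mix_percent = {"PubMed": 60, "Med-Inst": 20, "MIMIC": 20}
budget = {corpus: total_tokens * pct // 100 for corpus, pct in mix_percent.items()}
# PubMed -> 30B tokens, Med-Inst -> 10B, MIMIC -> 10B
```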

Methodology

cpt-en-large is trained in two phases, initialized from ModernBERT-large:

  • Phase 1 (CLM detour, 50B tokens): The bidirectional attention mask is replaced with a causal mask, and the model is trained with next-token prediction. This dense training signal (100% of positions) deeply modifies early transformer layers for domain adaptation.
  • Phase 2 (MLM decay, 5B tokens): Bidirectional attention is restored, and the model is trained with masked language modeling at a 15% masking rate. The learning rate decays from its peak to 10% of peak following a 1-sqrt schedule.
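The 1-sqrt decay used in Phase 2 can be sketched as a step-dependent multiplier on the peak learning rate. The exact schedule lives in the training config; the form below (final factor 0.1, i.e. decay from peak to 10% of peak) is an illustrative assumption:

```python
import math

def one_sqrt_lr(step, total_steps, peak_lr=2e-4, final_factor=0.1):
    """Illustrative 1-sqrt decay: peak_lr at step 0, final_factor * peak_lr at the end."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    multiplier = final_factor + (1.0 - final_factor) * (1.0 - math.sqrt(frac))
    return peak_lr * multiplier

print(one_sqrt_lr(0, 10_000))       # peak learning rate
print(one_sqrt_lr(10_000, 10_000))  # ~10% of peak
```

Compared with linear or cosine decay, the 1-sqrt shape holds the learning rate high for most of the decay phase and drops sharply near the end.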

Both phases use the same data mix. Training used AdamW (lr=2e-4, beta1=0.9, beta2=0.98), bf16 mixed precision, global batch size of 384 sequences (~3.1M tokens), on 4x H100 GPUs with Composer.
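As a sanity check on the stated throughput, 384 sequences at the full 8,192-token context give:

```python
global_batch_sequences = 384
context_length = 8192
tokens_per_batch = global_batch_sequences * context_length
print(f"{tokens_per_batch:,} tokens per optimizer step")  # 3,145,728, i.e. ~3.1M
```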

Why a CLM Detour?

CLM supervises every token position, producing dense gradient updates that deeply modify early transformer layers. These changes persist through the MLM decay phase, a phenomenon we call computational hysteresis. The Large model retains 67.2% CKA divergence from its MLM counterpart (vs 56.5% for Base), showing that hysteresis scales with model capacity. The CLM benefit also widens at Large scale: +0.8pp (Large) vs +0.3pp (Base). See our paper for the full mechanistic analysis.
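CKA here is centered kernel alignment between layer representations, with divergence reported as 1 minus the similarity. The paper uses its own analysis code; the standard linear-CKA formula, sketched in plain Python on small matrices (rows = examples, columns = features), looks like this:

```python
import math

def center_columns(X):
    """Subtract each column mean from an n x d matrix (list of row lists)."""
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    return [[row[j] - means[j] for j in range(d)] for row in X]

def cross_frob_sq(A, B):
    """||A^T B||_F^2 for two matrices with the same number of rows."""
    total = 0.0
    for i in range(len(A[0])):
        for j in range(len(B[0])):
            dot = sum(A[k][i] * B[k][j] for k in range(len(A)))
            total += dot * dot
    return total

def linear_cka(X, Y):
    """Linear CKA between two representation matrices (1.0 = identical up to rotation/scale)."""
    Xc, Yc = center_columns(X), center_columns(Y)
    return cross_frob_sq(Xc, Yc) / math.sqrt(
        cross_frob_sq(Xc, Xc) * cross_frob_sq(Yc, Yc)
    )

X = [[1.0, 2.0], [3.0, 5.0], [4.0, 1.0]]
print(linear_cka(X, X))  # ~1.0 for identical representations
```

Linear CKA is invariant to isotropic scaling and orthogonal rotation of the features, which is why it is a common choice for comparing layers across differently trained models.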

Evaluation

English biomedical benchmark results (11 tasks, 5 seeds per model):

Clinical Tasks

Model                        Ctx    ChemProt  Phenotype  COS    Social Hist.  DEID   Avg
cpt-en-large                 8192   90.4      61.3       94.7   56.5          84.2   77.4
MLM baseline Large (ours)    8192   90.5      61.0       94.9   55.0          82.3   76.7
BioClinical-ModernBERT-base  8192   90.0      60.7       94.8   56.0          81.8   76.7
PubMedBERT                   512    90.2      52.0       95.0   48.7          80.4   73.3

BigBIO Tasks

Model                        Ctx    AnatEM  BC5CDR  JNLPBA  NCBI   GAD    HoC    Avg
cpt-en-large                 8192   83.2    89.8    75.3    81.7   79.7   69.3   79.8
MLM baseline Large (ours)    8192   82.0    89.4    75.5    81.8   76.4   67.8   78.8
BioClinical-ModernBERT-base  8192   79.2    88.7    74.8    78.7   75.8   67.0   77.4
PubMedBERT                   512    83.3    89.7    74.9    82.1   79.3   71.0   80.1

Overall

Model                        Clinical  BigBIO  Overall
cpt-en-large                 77.4      79.8    78.7
MLM baseline Large (ours)    76.7      78.8    77.9
cpt-en-base                  76.9      78.9    78.0
BioClinical-ModernBERT-base  76.7      77.4    77.0
PubMedBERT                   73.3      80.1    77.0

cpt-en-large achieves the highest overall score (78.7%), with the CLM benefit widening at Large scale (+0.8pp vs +0.3pp for Base). The model sets a new state of the art on DEID (84.2%), AnatEM (83.2%), and GAD (79.7%).

Intended Use

This model is designed for English biomedical and clinical NLP tasks:

  • Named entity recognition (diseases, chemicals, genes, anatomy)
  • Document classification (clinical phenotyping, relation extraction)
  • De-identification of clinical notes
  • Information extraction from PubMed abstracts and clinical reports

The 8,192-token context is important for long clinical documents. The Large size provides improved performance over Base, particularly on NER tasks (AnatEM, DEID, GAD), at the cost of higher compute requirements.

Limitations

  • Trained on English biomedical text; not suitable for other languages without further adaptation. See cpt-fr-base for French.
  • Encoder model: produces contextualized representations, does not generate text.
  • Clinical text may contain sensitive patterns; users are responsible for compliance with applicable regulations (HIPAA, etc.).
  • Training data includes MIMIC clinical notes, which are de-identified but derived from real patient records.

License

Apache 2.0

Citation

@inproceedings{anonymous2026clm,
  title={Under review},
  author={Anonymous},
  booktitle={Under review},
  year={2026}
}
