COMBO UD 2.17 Models
This is an English-language model based on COMBO-NLP, an open-source natural language preprocessing system. It performs tokenization, part-of-speech tagging, morphological analysis, lemmatization, and dependency parsing.
The English model uses FacebookAI/xlm-roberta-base as its base encoder and is trained on UD_English-EWT (UD v2.17).
Evaluation was performed on the UD_English-EWT test split using the official CoNLL 2018 shared task evaluation script.
Two rows are reported per table: F1 computed over the full text (with system segmentation), and accuracy over words aligned between the system and gold segmentations:
| Metric | Tokens | Sentences | Words | UPOS | XPOS | UFeats | AllTags | Lemmas |
|---|---|---|---|---|---|---|---|---|
| Full-text (F1) | 98.86 | 83.79 | 98.57 | 96.59 | 96.20 | 96.77 | 95.39 | 96.85 |
| Aligned accuracy | 0.00 | 0.00 | 0.00 | 97.99 | 97.59 | 98.17 | 96.78 | 98.25 |

| Metric | UAS | LAS | CLAS | MLAS | BLEX |
|---|---|---|---|---|---|
| Full-text (F1) | 91.40 | 89.80 | 87.27 | 84.32 | 85.46 |
| Aligned accuracy | 92.73 | 91.10 | 88.57 | 85.58 | 86.74 |
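The distinction between the two rows can be sketched in a few lines. This is an illustrative sketch, not the official evaluation script: full-text F1 is the harmonic mean of precision against system tokens and recall against gold tokens, so segmentation errors depress it, while aligned accuracy only scores attributes on words whose spans match in both files. All numbers below are toy values, not taken from the tables above.

```python
def f1(correct: int, system_total: int, gold_total: int) -> float:
    """Harmonic mean of precision (correct/system) and recall (correct/gold)."""
    if correct == 0 or system_total == 0 or gold_total == 0:
        return 0.0
    p = correct / system_total
    r = correct / gold_total
    return 2 * p * r / (p + r)

def aligned_accuracy(correct: int, aligned: int) -> float:
    """Fraction of correct labels among word pairs aligned between system and gold."""
    return correct / aligned if aligned else 0.0

# Toy example: 95 correct UPOS tags among 97 aligned words,
# with 100 gold words and 98 system words overall.
print(f"full-text F1:     {f1(95, 98, 100):.4f}")
print(f"aligned accuracy: {aligned_accuracy(95, 97):.4f}")
```

This also explains the 0.00 entries in the aligned-accuracy row above: segmentation metrics (Tokens, Sentences, Words) are only meaningful as F1 and are not reported as aligned accuracy.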
Install the library from PyPI (assuming you have a virtual environment created):
```shell
pip install combo-nlp
```
Install the Lambo segmenter; it is only needed when passing raw text strings to COMBO:

```shell
pip install --index-url https://pypi.clarin-pl.eu/ lambo
```
```python
from combo import COMBO

# Load a pre-trained model with its corresponding Lambo segmenter
nlp = COMBO("English")

# Parse raw text (handles sentence splitting + tokenization)
result = nlp("The quick brown fox jumps over the lazy dog.")

# Inspect the results
for sentence in result:
    for token in sentence:
        print(f"{token.form:<15} {token.lemma:<15} {token.upos:<8} head={token.head} {token.deprel}")
```
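The token fields printed above map directly onto columns of the CoNLL-U format that UD treebanks use. The sketch below shows that 10-column layout; the `Token` class is a hypothetical stand-in for whatever token objects COMBO returns, and only the fields used above (form, lemma, upos, head, deprel) are assumed.

```python
from dataclasses import dataclass

@dataclass
class Token:
    # Hypothetical stand-in for a parsed token; not COMBO's actual class.
    id: int
    form: str
    lemma: str
    upos: str
    head: int
    deprel: str

def to_conllu(tokens: list[Token]) -> str:
    """Render one sentence as tab-separated 10-column CoNLL-U lines.

    Columns not covered by this sketch (XPOS, FEATS, DEPS, MISC) get '_'.
    """
    lines = []
    for t in tokens:
        lines.append("\t".join([
            str(t.id), t.form, t.lemma, t.upos,
            "_",              # XPOS
            "_",              # FEATS
            str(t.head), t.deprel,
            "_", "_",         # DEPS, MISC
        ]))
    return "\n".join(lines) + "\n"

sent = [
    Token(1, "The", "the", "DET", 2, "det"),
    Token(2, "fox", "fox", "NOUN", 3, "nsubj"),
    Token(3, "jumps", "jump", "VERB", 0, "root"),
]
print(to_conllu(sent))
```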
Refer to the COMBO-NLP documentation for installation and usage instructions: