COMBO UD 2.17 Models
This is a German-language model based on COMBO-NLP, an open-source natural language processing system. It performs:

- part-of-speech tagging (UPOS/XPOS)
- morphological feature prediction (UFeats)
- lemmatization
- dependency parsing
The German model uses FacebookAI/xlm-roberta-base as its base encoder and is trained on UD_German-HDT (UD v2.17).
Evaluation was performed on the UD_German-HDT test split using the standard CoNLL 2018 shared task evaluation script.

Two rows are reported per table: F1 computed over the full text, and accuracy over aligned words. Aligned accuracy is not meaningful for the segmentation columns (Tokens, Sentences, Words), hence the 0.00 entries there.
| Score | Tokens | Sentences | Words | UPOS | XPOS | UFeats | AllTags | Lemmas |
|---|---|---|---|---|---|---|---|---|
| Full-text (F1) | 99.86 | 98.10 | 99.86 | 98.46 | 98.40 | 93.74 | 93.41 | 98.39 |
| Aligned accuracy | 0.00 | 0.00 | 0.00 | 98.60 | 98.54 | 93.88 | 93.54 | 98.53 |
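For reference, the CoNLL 2018 script computes each full-text F1 from precision over the system's words and recall over the gold words. A minimal sketch of that formula (toy counts, not this model's output):

```python
def f1(correct: int, system_total: int, gold_total: int) -> float:
    """F1 as in the CoNLL 2018 evaluation: harmonic mean of
    precision (correct/system_total) and recall (correct/gold_total)."""
    p = correct / system_total if system_total else 0.0
    r = correct / gold_total if gold_total else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Toy counts: 98 correct attributes over 100 system words and 100 gold words
print(round(f1(98, 100, 100), 4))  # 0.98
```

When system and gold segmentations agree exactly, precision equals recall and F1 reduces to plain accuracy, which is why the F1 and aligned-accuracy rows are close.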
| Score | UAS | LAS | CLAS | MLAS | BLEX |
|---|---|---|---|---|---|
| Full-text (F1) | 97.38 | 96.68 | 94.87 | 84.99 | 92.56 |
| Aligned accuracy | 97.52 | 96.81 | 95.04 | 85.14 | 92.74 |
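As a reminder of what the attachment scores measure: UAS counts words whose head is predicted correctly, while LAS additionally requires the correct dependency label. A toy sketch (hypothetical heads and labels, not COMBO output):

```python
# Each word is a (head_index, deprel) pair; index 0 marks the root.
gold = [(2, "amod"), (0, "root"), (2, "obj")]
pred = [(2, "amod"), (0, "root"), (1, "obj")]

# UAS: fraction of words with the correct head.
uas = sum(g[0] == p[0] for g, p in zip(gold, pred)) / len(gold)
# LAS: fraction with both the correct head and the correct label.
las = sum(g == p for g, p in zip(gold, pred)) / len(gold)
print(f"UAS={uas:.2f} LAS={las:.2f}")  # UAS=0.67 LAS=0.67
```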
Install the library from PyPI (assuming you have a virtual environment created):

```shell
pip install combo-nlp
```
Install the Lambo segmenter (only needed when passing raw text strings to COMBO):

```shell
pip install --index-url https://pypi.clarin-pl.eu/ lambo
```
```python
from combo import COMBO

# Load a pre-trained model with the corresponding Lambo segmenter
nlp = COMBO("German")

# Parse raw text (handles sentence splitting + tokenization)
result = nlp("Der schnelle braune Fuchs springt über den faulen Hund.")

# Inspect results
for sentence in result:
    for token in sentence:
        print(f"{token.form:<15} {token.lemma:<15} {token.upos:<8} head={token.head} {token.deprel}")
```
Refer to the COMBO-NLP documentation for further installation and usage instructions.