# CISA-BERTurk-Sentiment
This model performs Cross-Individual Sentiment Analysis (CISA) on historical Turkish texts (1900-1950): it analyzes the author's sentiment toward specific individuals mentioned in the text, rather than the overall sentiment of the text.

### Example

Text: *"Ali Bey'in vefatı bizleri elem-i azîme sevk etmişti, onunla müşterek mesaimiz mevcuttu."* ("Ali Bey's death had plunged us into deep sorrow; we had shared work with him.")
| Analysis Type | Result | Explanation |
|---|---|---|
| Standard SA | ❌ Negative | Overall text tone is sad |
| CISA | ✅ Positive | Author's respect/love for Ali Bey |
### Performance Metrics

| Metric | Value |
|---|---|
| Accuracy | 87.08% |
| Precision | 87.07% |
| Recall | 87.08% |
| F1-Score | 87.05% |
### Installation

```bash
pip install transformers torch huggingface_hub
```
### Quick Start

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dbbiyte/CISA-BERTurk-sentiment")
model = AutoModel.from_pretrained("dbbiyte/CISA-BERTurk-sentiment", trust_remote_code=True)

text = "Ali Bey'in vefatı bizleri elem-i azîme sevk etmişti, onunla müşterek mesaimiz mevcuttu."
entity_text = "Ali Bey"
entity_start = text.index(entity_text)        # 0
entity_end = entity_start + len(entity_text)  # 7

result = model.predict(text, entity_text, entity_start, entity_end, tokenizer)
print(result["sentiment_label"])  # → "Positive"
print(result["sentiment_probs"])  # → [0.04, 0.11, 0.85]
```
### Batch Prediction

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("dbbiyte/CISA-BERTurk-sentiment")
model = AutoModel.from_pretrained("dbbiyte/CISA-BERTurk-sentiment", trust_remote_code=True)
model.eval()

# Each sample: (text, entity_text, entity_start, entity_end)
samples = [
    ("Ali Bey'in vefatı bizleri elem-i azîme sevk etmişti, onunla müşterek mesaimiz mevcuttu.",
     "Ali Bey", 0, 7),
    ("Leyla Hanım'ın musiki resitalinde, nağmelerinin ruhuma işledi.",
     "Leyla Hanım", 0, 11),
    ("Paşa'nın emirleri hiçbir zaman yerinde değildi.",
     "Paşa", 0, 4),
]

for text, ent_text, ent_start, ent_end in samples:
    result = model.predict(text, ent_text, ent_start, ent_end, tokenizer)
    print(f"Entity   : {ent_text}")
    print(f"Sentiment: {result['sentiment_label']} "
          f"(conf: {max(result['sentiment_probs']):.2f})")
    print()
```
### Output Format

`model.predict` returns a dictionary:

```python
{
    "sentiment": 2,                         # 0=Negative, 1=Neutral, 2=Positive
    "sentiment_label": "Positive",
    "sentiment_probs": [0.04, 0.11, 0.85],  # [neg, neu, pos]
    "relation": 1,                          # 0=Indirect, 1=Direct
    "relation_probs": [0.12, 0.88]
}
```
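For downstream use, the raw indices and probabilities in this dictionary can be mapped to readable labels with a small helper. This is an illustrative sketch, not part of the model's API; the helper name and label lists are hypothetical (the index order follows the comments above).

```python
# Hypothetical helper for decoding the prediction dictionary shown above.
SENTIMENT_LABELS = ["Negative", "Neutral", "Positive"]
RELATION_LABELS = ["Indirect", "Direct"]

def decode_prediction(result):
    """Map raw indices/probabilities to human-readable labels."""
    sent_idx = max(range(len(result["sentiment_probs"])),
                   key=lambda i: result["sentiment_probs"][i])
    return {
        "sentiment": SENTIMENT_LABELS[sent_idx],
        "confidence": result["sentiment_probs"][sent_idx],
        "relation": RELATION_LABELS[result["relation"]],
    }

example = {
    "sentiment": 2,
    "sentiment_label": "Positive",
    "sentiment_probs": [0.04, 0.11, 0.85],
    "relation": 1,
    "relation_probs": [0.12, 0.88],
}
print(decode_prediction(example))
# → {'sentiment': 'Positive', 'confidence': 0.85, 'relation': 'Direct'}
```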
### GPU Usage

```python
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModel.from_pretrained(
    "dbbiyte/CISA-BERTurk-sentiment",
    trust_remote_code=True
).to(device)

result = model.predict(text, entity_text, entity_start, entity_end, tokenizer, device=device)
```
> **Note:** `trust_remote_code=True` is required because this model uses a custom DECA-EBSA architecture (`modeling_cisa.py`) hosted in the repository. The code is fully auditable at `dbbiyte/CISA-BERTurk-sentiment/blob/main/modeling_cisa.py`.
### Expected CISA Results
For the examples in our test set:
| Text | Entity | Standard SA | CISA Result |
|------|--------|-------------|-------------|
| "Ali Bey'in vefatı hepimizi hüzne boğmuştu, onunla senelerce müşterek mesaimiz mevcuttu." | Ali Bey | Negative | **Positive** |
| "Leyla Hanım'ın musiki resitalinde, nağmelerinin ruhuma işledi" | Leyla Hanım | Positive | **Positive** |
**CISA Key Insight**: The model analyzes the author's sentiment toward the mentioned person, not the overall text sentiment.
## 🏗️ DECA-EBSA Architecture
### Dual-Encoder Structure:
1. **Text Encoder**: Full text context processing
2. **Entity Encoder**: Entity + local context processing
### Key Features:
- **Enhanced Entity-Context Attention**: 12-head cross-attention
- **Position-Aware Modeling**: Entity position information
- **Turkish Linguistic Features**: Ottoman Turkish specific patterns
- **Context-Aware Classification**: Formal/informal distinction
- **Adaptive Focal Loss**: Focus on difficult examples
- **R-Drop Regularization**: Consistency enforcement
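The entity-context fusion named above (entity-side states attending over full-text states across 12 heads) can be sketched in plain NumPy. This is an illustrative sketch under assumed shapes (768-dimensional BERTurk hidden states), not the repository implementation; all names here are hypothetical.

```python
import numpy as np

def multi_head_cross_attention(entity_h, text_h, num_heads=12):
    """Entity hidden states (queries) attend over full-text hidden
    states (keys/values) via scaled dot-product attention."""
    d_model = entity_h.shape[-1]
    assert d_model % num_heads == 0
    d_head = d_model // num_heads

    def split(x):  # (seq, d_model) -> (heads, seq, d_head)
        return x.reshape(x.shape[0], num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(entity_h), split(text_h), split(text_h)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, ent, txt)
    weights = np.exp(scores - scores.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)             # softmax over text tokens
    out = weights @ v                                     # (heads, ent, d_head)
    return out.transpose(1, 0, 2).reshape(entity_h.shape[0], d_model)

rng = np.random.default_rng(0)
entity_states = rng.normal(size=(4, 768))   # entity + local context tokens
text_states = rng.normal(size=(32, 768))    # full-text tokens
fused = multi_head_cross_attention(entity_states, text_states)
print(fused.shape)  # (4, 768)
```

In a dual-encoder setup like the one described, the fused entity representation would then feed the sentiment classification head.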
## 🔬 Research Contributions
### 1. Cross-Individual Sentiment Analysis (CISA)
- **First application** of CISA to historical Turkish
- **Author perspective** focused sentiment analysis
- **Entity-based approach** for person-specific emotions
### 2. DECA-EBSA Methodology
- **Dual-Encoder** architecture
- **Context-Aware** modeling
- **Entity-Based** attention mechanisms
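The Adaptive Focal Loss listed under Key Features builds on the standard focal loss, which down-weights well-classified examples so training focuses on hard ones. The sketch below shows only the standard formulation (the exact adaptive weighting is not specified here); the helper name and parameter values are illustrative.

```python
import math

def focal_loss(probs, target, gamma=2.0, alpha=1.0):
    """-alpha * (1 - p_t)^gamma * log(p_t): scales down the loss of
    confident correct predictions by the factor (1 - p_t)^gamma."""
    p_t = probs[target]
    return -alpha * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes almost nothing...
easy = focal_loss([0.05, 0.05, 0.90], target=2)
# ...while an uncertain one keeps a sizeable training signal.
hard = focal_loss([0.40, 0.35, 0.25], target=2)
print(f"easy: {easy:.4f}  hard: {hard:.4f}")  # → easy: 0.0011  hard: 0.7798
```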
### 3. Historical Turkish NLP Contributions
- **1900-1950 period** specialized dataset
- **Ottoman Turkish** linguistic features
- **Formal/informal** context distinction
## 👥 Authors
**İzmir Institute of Technology - Digital Humanities and AI Laboratory**:
- **Dr. Mustafa İLTER** - İzmir Institute of Technology
- **Dr. Doğan EVECEN** - İzmir Institute of Technology
- **Dr. Buket ERŞAHİN** - İzmir Institute of Technology
- **Dr. Yasemin ÖZCAN GÖNÜLAL** - İzmir Institute of Technology
- **Assoc. Prof. Selma TEKİR** - İzmir Institute of Technology
**Pamukkale University**:
- **Assoc. Prof. Sezen KARABULUT** - Pamukkale University
- **İbrahim BERCİ** - Pamukkale University
- **Emre ONUÇ** - Pamukkale University
## 🏦 Funding & Acknowledgments
This work was supported by **The Scientific and Technological Research Council of Turkey (TÜBİTAK)** under project number **323K372**. We thank TÜBİTAK for their support.
## 📚 BERTurk Reference
This model uses [BERTurk](https://github.com/stefan-it/turkish-bert) developed by Stefan Schweter, a BERT model pre-trained on 35GB of Turkish text, optimized for Turkish natural language processing tasks.
## 📄 License and Usage Terms
This model is released under **Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)** license.
### ✅ Permitted Uses:
- **Academic research** (citation required)
- **Educational purposes**
- **Non-profit projects**
- **Personal experimental studies**
### ❌ Prohibited Uses:
- **Commercial applications**
- **Profit-driven projects**
- **Commercial product/service development**
### 📄 Citation Requirement:
When using this model, please cite as:
```bibtex
@misc{ilter2025cisa,
  author       = {İlter, Mustafa and Evecen, Doğan and Erşahin, Buket and Özcan Gönülal, Yasemin and Karabulut, Sezen and Berci, İbrahim and Onuç, Emre and Tekir, Selma},
  title        = {CISA-BERTurk-Sentiment: Cross-Individual Sentiment Analysis for Historical Turkish},
  howpublished = {Deep Learning Model},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/dbbiyte/CISA-BERTurk-sentiment},
  doi          = {10.57967/hf/6142},
  year         = {2025},
}
```
**Tags**: turkish · sentiment-analysis · historical-texts · entity-based · cross-individual · berturk · bert · 1900-1950 · pytorch · safetensors