---
license: apache-2.0
tags:
- babylm
- minimal-pairs
- BLiMP
- syntactic-evaluation
pretty_name: BLiSS
size_categories:
- 1M<n<10M
---

## Quality Constraints

Artificial error generation enforces several constraints:

- … (> 0.7)
- Avoids modifying at the same position as the original learner error
- Length change limits (±30%)

## Expected Results

When reconstructed correctly, you should get:

- **Write&Improve Individual**: ~63,926 minimal pairs with a ~27% artificial error success rate
- **EFCamDat**: Varies by language and proficiency level
- **FCE**: Varies by first language

## Citation

If you use this toolkit or the reconstructed dataset, please cite our paper (see "Cite the BLiSS Paper!" below).

## Troubleshooting

### Common Issues

1. **Missing dependencies**: Install all required packages and the spaCy model
2. **File not found**: Verify you have the correct Write&Improve 2024 data files
3. **Low success rates**: The precise approach has lower success rates (~27%) by design
4. **Memory issues**: Use the high-coverage approach or process files individually

### Expected File Structure

```
your_data_directory/
├── en-writeandimprove2024-final-versions-train-sentences.orig
├── en-writeandimprove2024-final-versions-train-sentences.corr
├── en-writeandimprove2024-final-versions-train-sentences.m2
├── en-writeandimprove2024-final-versions-train-sentences.ids
├── en-writeandimprove2024-final-versions-dev-sentences.orig
├── en-writeandimprove2024-final-versions-dev-sentences.corr
├── en-writeandimprove2024-final-versions-dev-sentences.m2
├── en-writeandimprove2024-final-versions-dev-sentences.ids
└── en-writeandimprove2024-final-versions-m2-essay-info.tsv
```

### Success Rate Verification

The artificial error generation success rates should approximately match:

- **Precise approach**: ~27% for Write&Improve
- **High-coverage approach**: ~45–55% for Write&Improve

Lower rates may indicate missing dependencies or data issues.

## License

This toolkit is released under the MIT License. The original datasets retain their respective licenses and redistribution terms.
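### Quick Verification Sketch

The expected file layout and success rates above can be sanity-checked with a short script. This is a minimal sketch, not part of the toolkit: the filenames come from this README, while `check_layout` and `success_rate` are illustrative helper names.

```python
# Sanity-check the expected Write&Improve 2024 file layout and compute a
# simple success rate. Filenames follow the "Expected File Structure"
# section above; the helper names here are illustrative, not toolkit APIs.
from pathlib import Path

PREFIX = "en-writeandimprove2024-final-versions"
SPLITS = ["train", "dev"]
SUFFIXES = ["orig", "corr", "m2", "ids"]


def check_layout(data_dir: str) -> list[str]:
    """Return the expected files that are missing from data_dir."""
    root = Path(data_dir)
    expected = [
        f"{PREFIX}-{split}-sentences.{suffix}"
        for split in SPLITS
        for suffix in SUFFIXES
    ]
    expected.append(f"{PREFIX}-m2-essay-info.tsv")
    return [name for name in expected if not (root / name).exists()]


def success_rate(n_pairs: int, n_attempted: int) -> float:
    """Fraction of attempted sentences that yielded a minimal pair."""
    return n_pairs / n_attempted if n_attempted else 0.0
```

For the precise approach on Write&Improve, a rate near 0.27 is in line with the numbers reported above; substantially lower values suggest missing dependencies or data issues.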
## Contact

For questions about the toolkit or dataset reconstruction, please open an issue or contact [sas245@cam.ac.uk].

## Cite the BLiSS Paper!

```
Yuan Gao, Suchir Salhan, Andrew Caines, Paula Buttery, and Weiwei Sun. 2025. BLiSS: Evaluating Bilingual Learner Competence in Second Language Small Language Models. In Proceedings of the First BabyLM Workshop, pages 160–174, Suzhou, China. Association for Computational Linguistics.
```

```bibtex
@inproceedings{gao-etal-2025-bliss,
    title = "{BL}i{SS}: Evaluating Bilingual Learner Competence in Second Language Small Language Models",
    author = "Gao, Yuan and
      Salhan, Suchir and
      Caines, Andrew and
      Buttery, Paula and
      Sun, Weiwei",
    editor = "Charpentier, Lucas and
      Choshen, Leshem and
      Cotterell, Ryan and
      Gul, Mustafa Omer and
      Hu, Michael Y. and
      Liu, Jing and
      Jumelet, Jaap and
      Linzen, Tal and
      Mueller, Aaron and
      Ross, Candace and
      Shah, Raj Sanjay and
      Warstadt, Alex and
      Wilcox, Ethan Gotlieb and
      Williams, Adina",
    booktitle = "Proceedings of the First BabyLM Workshop",
    month = nov,
    year = "2025",
    address = "Suzhou, China",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2025.babylm-main.13/",
    doi = "10.18653/v1/2025.babylm-main.13",
    pages = "160--174",
    ISBN = "TODO",
    abstract = "Cross-lingual extensions of the BabyLM Shared Task beyond English incentivise the development of Small Language Models that simulate a much wider range of language acquisition scenarios, including code-switching, simultaneous and successive bilingualism and second language acquisition. However, to our knowledge, there is no benchmark of the formal competence of cognitively-inspired models of L2 acquisition, or \textbf{L2LMs}. To address this, we introduce a \textbf{Benchmark of Learner Interlingual Syntactic Structure (BLiSS)}. BLiSS consists of 1.5M naturalistic minimal pairs dataset derived from errorful sentence{--}correction pairs in parallel learner corpora. These are systematic patterns {--}overlooked by standard benchmarks of the formal competence of Language Models{--} which we use to evaluate L2LMs trained in a variety of training regimes on specific properties of L2 learner language to provide a linguistically-motivated framework for controlled measure of the interlanguage competence of L2LMs."
}
```

---

**Note**: This toolkit reconstructs the dataset used in our research. Exact numbers may vary slightly due to preprocessing differences, but should be within ~1–2% of the reported figures.

Paper page: https://huggingface.co/papers/2510.19419