Front matter
[pdf] [bib]

Bias in, Bias out: Annotation Bias in Multilingual Large Language Models
Xia Cui, Ziyi Huang and Naeemeh Adel
[pdf] [bib] [optional] [supplementary]
pp. 1–16

Freeze and Reveal: Exposing Modality Bias in Vision-Language Models
Vivek Hruday Kavuri, Vysishtya Karanam, Venkamsetty Venkata Jahnavi, Kriti Madumadukala, Balaji Lakshmipathi Darur and Ponnurangam Kumaraguru
[pdf] [bib]
pp. 17–26

AnthroSet: a Challenge Dataset for Anthropomorphic Language Detection
Dorielle Lonke, Jelke Bloem and Pia Sommerauer
[pdf] [bib]
pp. 27–39

FLARE: An Error Analysis Framework for Diagnosing LLM Classification Failures
Keerthana Madhavan, Luiza Antonie and Stacey Scott
[pdf] [bib]
pp. 40–44

BuST: A Siamese Transformer Model for AI Text Detection in Bulgarian
Andrii Maslo and Silvia Gargova
[pdf] [bib]
pp. 45–52

F*ck Around and Find Out: Quasi-Malicious Interactions with LLMs as a Site of Situated Learning
Sarah O'Neill
[pdf] [bib]
pp. 53–58

<think> So let’s replace this phrase with insult... </think> Lessons learned from generation of toxic texts with LLMs
Sergey Pletenev, Alexander Panchenko and Daniil Moskovskiy
[pdf] [bib]
pp. 59–63

Anthropomorphizing AI: A Multi-Label Analysis of Public Discourse on Social Media
Muhammad Owais Raza and Areej Fatemah Meghji
[pdf] [bib]
pp. 64–73

Multilingual != Multicultural: Evaluating Gaps Between Multilingual Capabilities and Cultural Alignment in LLMs
Jonathan Hvithamar Rystrøm, Hannah Rose Kirk and Scott Hale
[pdf] [bib]
pp. 74–85

Learn, Achieve, Predict, Propose, Forget, Suffer: Analysing and Classifying Anthropomorphisms of LLMs
Matthew Shardlow, Ashley Williams, Charlie Roadhouse, Filippos Karolos Ventirozos and Piotr Przybyła
[pdf] [bib]
pp. 86–94

Leveraging the Scala type system for secure LLM-generated code
Alexander Sternfeld, Ljiljana Dolamic and Andrei Kucharavy
[pdf] [bib] [optional] [supplementary]
pp. 95–103