Measuring Political Bias in Korean Language Models
Received: Mar 01, 2026 ; Revised: Mar 31, 2026 ; Accepted: Apr 13, 2026
Published Online: Apr 30, 2026
ABSTRACT
This study audits the political orientations of seven instruction-tuned Korean large language models (LLMs) amid expanding sovereign-AI deployment. Diverging from Western-centric benchmarks, we evaluate these models using three localized instruments: the Community Test, the Hankr Political Compass, and the JoongAng Ilbo’s 2025 Political Orientation Test. Results reveal substantial cross-model dispersion, with no model remaining entirely neutral. While economic orientations generally lean moderately left, social and cultural positions vary widely. Notably, this variation correlates more strongly with developer type and release period than with parameter size, suggesting that institutional contexts, training data, and alignment practices leave distinct political fingerprints. Ultimately, this reproducible, Korea-specific audit framework establishes a baseline for evaluating LLM political bias and informs context-sensitive alignment strategies for sovereign AI development.