KORMo: Korean Open Reasoning Model for Everyone
An open-source hub for Korean language data and model research
Open Models
- KORMo-Team/KORMo-tokenizer – A tokenizer optimized for bilingual (Korean–English) language representation
- KORMo-Team/KORMo-10B-base – The KORMo-10B pretrained model trained on large-scale Korean and English corpora
- KORMo-Team/KORMo-10B-sft – A fine-tuned model enhanced with long-context reasoning and instruction-following data
- KORMo-Team/KORMo-10B-inst – The final instruction-tuned model with reasoning enhancement and RL (coming soon; currently awaiting GPU availability)
You can explore the full training history and checkpoints in each model's Revisions tab on Hugging Face.
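
As a quick, unofficial sketch of how these checkpoints might be loaded, the example below uses the Hugging Face transformers library. The repo ID comes from the list above; the revision string, generation settings, and prompt are placeholders (pick an actual branch or tag from the model's Revisions tab), and depending on how the repository is configured, `trust_remote_code=True` may also be needed.

```python
# Minimal sketch: load a KORMo checkpoint (optionally a specific revision) and generate text.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "KORMo-Team/KORMo-10B-sft"  # repo ID from the model list above

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    revision="main",      # placeholder: any branch/tag shown in the Revisions tab
    torch_dtype="auto",   # use the dtype stored in the repo
    device_map="auto",    # requires accelerate; remove for CPU-only loading
)

prompt = "한국어로 간단히 자기소개를 해줘."  # "Introduce yourself briefly in Korean."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
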
Links
About KORMo
KORMo is an open research initiative dedicated to advancing Korean language understanding and generation through large-scale, fully open-source models and datasets.
We aim to make Korean NLP research transparent, reproducible, and accessible to the global community.