Small Language Models for Curriculum-based Guidance

Konstantinos Katharakis, Sippo Rossi, Raghava Rao Mukkamala

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › Scientific › peer-review

Abstract

The adoption of generative AI and large language models (LLMs) in education is still emerging. In this study, we explore the development and evaluation of AI teaching assistants that provide curriculum-based guidance using a retrieval-augmented generation (RAG) pipeline applied to selected open-source small language models (SLMs). We benchmarked eight SLMs, including LLaMA 3.1, IBM Granite 3.3, and Gemma 3 (7–17B parameters), against GPT-4o. Our findings show that with proper prompting and targeted retrieval, SLMs can match LLMs in delivering accurate, pedagogically aligned responses. Importantly, SLMs offer significant sustainability benefits due to their lower computational and energy requirements, enabling real-time use on consumer-grade hardware without depending on cloud infrastructure. This makes them not only cost-effective and privacy-preserving but also environmentally responsible, positioning them as viable AI teaching assistants for educational institutions aiming to scale personalized learning in a sustainable and energy-efficient manner.
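The abstract describes the curriculum-based RAG pipeline only at a high level. As a rough illustration of the general idea, and not the authors' implementation, the sketch below retrieves curriculum passages by simple keyword overlap and folds them into a prompt that would then be passed to a locally hosted SLM. The scoring method, function names, and example curriculum snippets are all hypothetical.

```python
# Hypothetical sketch of curriculum-grounded retrieval-augmented prompting.
# This is NOT the pipeline from the paper; retrieval here is plain keyword
# overlap, used only to illustrate the retrieve-then-prompt pattern.

from collections import Counter

def score(query: str, passage: str) -> int:
    """Count shared lowercase word tokens between query and passage."""
    q = Counter(query.lower().split())
    p = Counter(passage.lower().split())
    return sum(min(q[w], p[w]) for w in q)

def retrieve(query: str, curriculum: list[str], k: int = 3) -> list[str]:
    """Return the k curriculum passages most similar to the query."""
    return sorted(curriculum, key=lambda p: score(query, p), reverse=True)[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    """Assemble a pedagogically framed prompt grounded in retrieved passages."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        "You are a teaching assistant. Answer using ONLY the curriculum "
        "excerpts below, and guide the student step by step.\n\n"
        f"Curriculum excerpts:\n{context}\n\n"
        f"Student question: {query}\nAnswer:"
    )

if __name__ == "__main__":
    # Hypothetical curriculum snippets standing in for real course material.
    curriculum = [
        "Week 3: Gradient descent updates parameters along the negative gradient.",
        "Week 5: Regularization penalizes large weights to reduce overfitting.",
        "Week 7: Retrieval-augmented generation grounds answers in external documents.",
    ]
    question = "How does gradient descent work?"
    prompt = build_prompt(question, retrieve(question, curriculum))
    print(prompt)  # this prompt would be sent to a locally hosted SLM
```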
Original language: English
Title of host publication: Proceedings of the 59th Hawaii International Conference on System Sciences
Number of pages: 10
Publication date: 06.01.2026
Pages: 1075-1084
Publication status: Published - 06.01.2026
MoE publication type: A4 Article in conference proceedings
