Fine-Tuning Large Language Model (LLM) for Chatbot with Additional Data Sources

Authors

  • Herlawati Herlawati, Universitas Bhayangkara Jakarta Raya
  • Rahmadya Trias Handayanto

DOI:

https://doi.org/10.33558/piksel.v13i1.10832

Keywords:

LLMs, Pretrained Model, Hugging Face, Chatbot, Transformer, Student Registration Site

Abstract

Large Language Models (LLMs) are currently gaining popularity in both research and practical implementation, with numerous open-source models available for use. One notable application is the AI-powered chatbot, which leverages a pre-trained LLM to provide accurate and relevant information to users. Through fine-tuning, such a model can be adapted to specific student registration data, making it easier for prospective students to access the information they need. Research findings indicate that the fine-tuned model achieves high accuracy in answering questions based on the supplied information. A further advantage is its ability to generate training data through a Llama-based chat application, resulting in a more interactive and engaging user experience.
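The fine-tuning workflow the abstract describes starts from question-and-answer pairs about student registration, converted into the chat-style message format commonly used for instruction fine-tuning of Llama-family models (e.g. as JSON Lines fed to a trainer). A minimal sketch of that data-preparation step, with hypothetical Q&A content invented for illustration:

```python
import json

# Hypothetical Q&A pairs about student registration (illustrative only;
# the paper's actual training data comes from the registration site).
qa_pairs = [
    {"question": "When does registration open?",
     "answer": "Registration opens on 1 June."},
    {"question": "What documents are required?",
     "answer": "A diploma and a national ID card."},
]

def to_chat_record(pair):
    """Convert one Q&A pair into the chat-style message format
    commonly used for instruction fine-tuning."""
    return {
        "messages": [
            {"role": "user", "content": pair["question"]},
            {"role": "assistant", "content": pair["answer"]},
        ]
    }

def build_jsonl(pairs):
    """Serialize the records as JSON Lines, one training example per line."""
    return "\n".join(json.dumps(to_chat_record(p), ensure_ascii=False)
                     for p in pairs)

if __name__ == "__main__":
    print(build_jsonl(qa_pairs))
```

A file in this shape can then be loaded with Hugging Face `datasets` and passed to a supervised fine-tuning trainer; the exact model and trainer configuration depend on the setup described in the full paper.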

Downloads

Download data is not yet available.

Published

2025-03-31

How to Cite

Herlawati, H., & Handayanto, R. T. (2025). Fine-Tuning Large Language Model (LLM) for Chatbot with Additional Data Sources. PIKSEL : Penelitian Ilmu Komputer Sistem Embedded and Logic, 13(1), 125–132. https://doi.org/10.33558/piksel.v13i1.10832

Section

Articles