Abstract

This paper proposes using large language models as recommendation systems for museum visitors. Because these models lack an inherent notion of context, they cannot handle the temporal information that recommendations in cultural environments often depend on (e.g., special exhibitions or events). To address this, the present work enhances the capabilities of large language models through a fine-tuning process that incorporates contextual information and user instructions. The resulting models are expected to provide personalized recommendations aligned with user preferences and desires. More specifically, Generative Pre-trained Transformer 4, a knowledge-based large language model, is fine-tuned into a context-aware recommendation system that adapts its suggestions to user input and specific contextual factors such as location, time of visit, and other relevant parameters. The effectiveness of the proposed approach is evaluated through user studies, which confirm improved user experience and engagement within the museum environment.
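To make the fine-tuning idea concrete, the sketch below shows one way a training record could encode contextual factors (location, time of visit, a special event) alongside a visitor's request, using the OpenAI chat-style JSONL fine-tuning format. The field names, file name, and museum data are illustrative assumptions, not details taken from the paper.

```python
import json

# Hypothetical fine-tuning record: the visitor's context (location, time of visit,
# special events) is serialized into the user message so the fine-tuned model
# learns to condition its recommendations on it. All values are made up.
record = {
    "messages": [
        {
            "role": "system",
            "content": (
                "You are a museum recommendation assistant. "
                "Use the visitor's context to suggest exhibits and events."
            ),
        },
        {
            "role": "user",
            "content": (
                "Context: location=main entrance; time=2024-05-18 14:30; "
                "special_event=Impressionism pop-up (ends 17:00).\n"
                "Request: I have one hour and I like 19th-century painting."
            ),
        },
        {
            "role": "assistant",
            "content": (
                "Start with the Impressionism pop-up near the main entrance, "
                "since it closes at 17:00, then continue to the permanent "
                "19th-century painting gallery."
            ),
        },
    ]
}

# Fine-tuning datasets in this format are written as one JSON object per line.
with open("museum_finetune.jsonl", "w", encoding="utf-8") as f:
    f.write(json.dumps(record) + "\n")
```

A collection of such records, covering different contexts and visitor preferences, would then be submitted to the provider's fine-tuning endpoint; the exact API call depends on the model and service used.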
