Abstract

Large language models (LLMs) have transformed open-domain abstractive summarization, producing coherent and precise summaries, yet their ability to adapt output to a user's knowledge level remains largely unexplored. This study investigates how effectively LLMs tailor summaries to user familiarity. We assess several LLM architectures across three familiarity settings: no familiarity, basic awareness, and complete familiarity, and gauge summary complexity using established readability metrics such as linguistic complexity and reading grade level. Results indicate that LLMs can adjust summaries to some extent based on stated user familiarity, but challenges persist in accurately assessing user knowledge and in producing summaries that are both informative and comprehensible. We identify areas for improvement, including richer user-knowledge modeling and domain-specific integration, and offer insights to guide the development of adaptive, personalized summarization systems.
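
To make the reading-grade-level measurement concrete, here is a minimal sketch (not from the paper) of the kind of readability scoring the study describes. The coefficients are the standard published Flesch-Kincaid Grade Level constants; the syllable counter is a common heuristic, and the two example summaries are hypothetical stand-ins for outputs generated under different familiarity settings.

```python
import re

def count_syllables(word: str) -> int:
    """Heuristic syllable count: vowel groups, minus a trailing silent 'e'."""
    word = word.lower()
    groups = re.findall(r"[aeiouy]+", word)
    count = len(groups)
    if word.endswith("e") and count > 1:
        count -= 1
    return max(count, 1)

def flesch_kincaid_grade(text: str) -> float:
    """Standard Flesch-Kincaid Grade Level:
    0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / max(len(sentences), 1))
            + 11.8 * (syllables / max(len(words), 1))
            - 15.59)

# Hypothetical summaries of the same source for two familiarity levels.
expert_summary = "Transformer decoders autoregressively condition on encoder states."
novice_summary = "The model reads the text first, then writes the summary one word at a time."

print(f"expert-level grade:  {flesch_kincaid_grade(expert_summary):.1f}")
print(f"novice-level grade:  {flesch_kincaid_grade(novice_summary):.1f}")
```

Under an evaluation of this shape, a summary written for a fully familiar reader should score a noticeably higher grade level than one written for a reader with no familiarity; a small or absent gap would indicate the model failed to adapt its output.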
