Abstract
Large language models (LLMs) have revolutionized various scientific fields in recent years, thanks to their generative and extractive abilities. However, their applications in the Agricultural Extension (AE) domain remain sparse and limited due to the unique challenges of unstructured agricultural data. Furthermore, mainstream LLMs excel at general and open-ended tasks but struggle with domain-specific ones. To address these issues, we propose AgXQA, a novel QA benchmark dataset for the AE domain. We trained and evaluated our domain-specific LM, AgRoBERTa, which outperformed other mainstream encoder- and decoder-based LMs on the extractive QA downstream task, achieving an EM score of 55.15% and an F1 score of 78.89%. Besides automated metrics, we also introduced a custom human evaluation metric, AgEES, which confirmed AgRoBERTa's performance, as demonstrated by a 94.37% agreement rate with expert assessments, compared to 92.62% for GPT-3.5. Notably, we conducted a comprehensive qualitative analysis whose results provide further insight into the strengths and weaknesses of both domain-specific and general LMs when evaluated on in-domain NLP tasks. Through this novel dataset and specialized LM, our research supports the further development of specialized LMs for the agriculture domain as a whole, and for AE in particular, thus fostering sustainable agricultural practices through improved extractive question answering.