Abstract

Large Language Models (LLMs) are being used for many language-based tasks, including translation, summarization and paraphrasing, and sentiment analysis, as well as for content-generation tasks such as code generation, answering search queries in natural language, and powering chatbots in customer service and other domains. Since much modern lexicography is based on investigation and analysis of large-scale corpora analogous to the (much larger) corpora used to train LLMs, we hypothesize that LLMs could be used for typical lexicographic tasks. A commercially available LLM API (OpenAI’s ChatGPT gpt-3.5-turbo) was used to complete typical lexicographic tasks, such as headword expansion, phrase and form finding, and creation of definitions and examples. The results showed that the output of this LLM is not up to the standard of human editorial work, requiring significant oversight because of errors and “hallucinations” (the tendency of LLMs to invent facts). In addition, the externalities of LLM use, including concerns about environmental impact and replication of bias, add to the overall cost.
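For illustration only, the sketch below shows how one such lexicographic query to gpt-3.5-turbo might be issued via the OpenAI Python SDK. The prompt wording, the example headword, and the parameter choices are assumptions made here for demonstration; they are not the prompts or code reported in the study.

```python
# Minimal sketch (illustrative assumptions, not the authors' actual prompts):
# asking gpt-3.5-turbo to draft a dictionary-style definition and example
# sentence for a headword.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

headword = "petrichor"  # hypothetical example headword
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "You are a lexicographer drafting concise dictionary entries."},
        {"role": "user",
         "content": f"Write a one-sentence definition and one example sentence "
                    f"for the word '{headword}'."},
    ],
    temperature=0.2,  # a low temperature to keep output conservative
)
print(response.choices[0].message.content)
```

Even with conservative settings, any such output would still require the editorial verification the abstract describes, since the model may produce plausible but invented senses or citations.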
