Abstract

Meaning representation is an important component of semantic parsing. Although researchers have designed many meaning representations, recent work focuses on only a few of them; thus, the impact of meaning representation on semantic parsing is less understood. Furthermore, existing work's performance is often not comprehensively evaluated due to the lack of readily available execution engines. Upon identifying these gaps, we propose UNIMER, a new unified benchmark on meaning representations, built by integrating existing semantic parsing datasets, completing the missing logical forms, and implementing the missing execution engines. The resulting unified benchmark contains the complete enumeration of logical forms and execution engines over three datasets × four meaning representations. A thorough experimental study on UNIMER reveals that neural semantic parsing approaches exhibit notably different performance when trained to generate different meaning representations. Program aliases and grammar rules also heavily impact the performance of different meaning representations. Our benchmark, execution engines, and implementation are available at: https://github.com/JasperGuo/Unimer

Highlights

  • A remarkable vision of artificial intelligence is to enable human interactions with machines through natural language

  • We propose UNIMER, a new unified benchmark on meaning representations, built by integrating and completing semantic parsing datasets across three datasets × four MRs; we implement six execution engines so that execution-match accuracy can be evaluated in all cases

  • To study the impact of program aliases, we apply equivalence-preserving transformations to SQL queries: shuffling expressions in Select, From, Where, and Having clauses; expressing Argmax/min with OrderBy and Limit clauses instead of a subquery; and replacing In clauses with Join clauses

  • For grounded semantic parsing, we experiment with the two neural approaches described in Section 2.2 on UNIMER and compare the resulting performance of different MRs with two metrics: exact-match accuracy and execution-match accuracy
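The distinction between the two metrics can be illustrated with a pair of aliased SQL queries. The following minimal sketch (the `city` table and its contents are hypothetical, not from the benchmark) shows an Argmax written as a subquery versus as OrderBy + Limit: the two programs fail exact match but pass execution match.

```python
import sqlite3

# Hypothetical toy database for illustration only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE city (name TEXT, population INTEGER)")
conn.executemany("INSERT INTO city VALUES (?, ?)",
                 [("springfield", 30), ("shelbyville", 20)])

# Two aliases of "the most populous city":
# a MAX subquery vs. ORDER BY ... LIMIT 1.
gold = ("SELECT name FROM city "
        "WHERE population = (SELECT MAX(population) FROM city)")
pred = "SELECT name FROM city ORDER BY population DESC LIMIT 1"

# Exact-match accuracy compares surface forms; execution-match
# accuracy compares the results of running both programs.
exact_match = gold.strip().lower() == pred.strip().lower()
execution_match = conn.execute(gold).fetchall() == conn.execute(pred).fetchall()

print(exact_match)      # False: the strings differ
print(execution_match)  # True: both denote the same answer
```

This is why execution engines matter for evaluation: a parser that emits a semantically correct alias of the gold program is penalized by exact match but credited by execution match.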

Summary

Introduction

A remarkable vision of artificial intelligence is to enable human interactions with machines through natural language. Semantic parsing aims to transform a natural language utterance into a logical form, i.e., a formal, machine-interpretable meaning representation (MR) (Zelle and Mooney, 1996; Dahl et al., 1994). We can observe that while Lambda Calculus is intensively studied, the other MRs have not been sufficiently studied. This biased evaluation is partly caused by the absence of target logical forms in the missing cells. Existing work often compares the performance on different MRs directly (Sun et al., 2020; Shaw et al., 2019; Chen et al., 2020) without considering such confounding factors. In this paper, we focus on grounded semantic parsing with knowledge bases, instead of ungrounded semantic parsing
