Classification of URL Citations on Scholarly Papers Using Intermediate Task Training


Similar Papers
  • Book Chapter
  • Cited by 6
  • 10.1007/978-3-642-23160-5_9
Advances in Deep Parsing of Scholarly Paper Content
  • Jan 1, 2011
  • Ulrich Schäfer + 1 more

We report on advances in deep linguistic parsing of the full textual content of 8,200 papers from the ACL Anthology, a collection of electronically available scientific papers in the fields of Computational Linguistics and Language Technology. We describe how, by incorporating new techniques, we increase both the speed and robustness of deep analysis, specifically on long sentences where deep parsing often failed in earlier approaches. With the current open-source HPSG (Head-driven Phrase Structure Grammar) for English (ERG), we obtain deep parses for more than 85% of the sentences in the 1.5-million-sentence corpus, whereas earlier approaches achieved only approximately 65% coverage. The resulting sentence-wise semantic representations are used in the Scientist's Workbench, a platform demonstrating the use and benefit of natural language processing (NLP) in giving scientists and other knowledge workers faster and better access to digital document content. With the generated NLP annotations, we are able to implement important, novel applications such as robust semantic search, citation classification, and (in the future) question answering and definition exploration.

  • Conference Article
  • Cited by 3
  • 10.18653/v1/2022.wiesp-1.2
Classification of URL Citations in Scholarly Papers for Promoting Utilization of Research Artifacts
  • Jan 1, 2022
  • Masaya Tsunokake + 1 more


  • Conference Article
  • 10.1145/3677389.3702570
Citation-Worthy Detection of URL Citations in Scholarly Papers
  • Dec 16, 2024
  • Kazuhiro Wada + 2 more


  • Research Article
  • Cited by 3
  • 10.26795/2307-1281-2018-6-3-5
Corporate Program of Training "English in the Sphere of Professional Activity" in Modern Educational Environment: Objectives, Contents and Results
  • Nov 10, 2018
  • Vestnik of Minin University
  • A N Shamov + 1 more

Introduction: higher-school educational establishments require new pedagogical staff with a good command of a foreign language. New staff members must be able to conduct lessons in their subject not only in their native language but also in a foreign (English) language. Materials and methods: the research employed general scientific methods of knowledge as well as special research methods. Results: psychological and linguo-didactic features of training adults in a foreign language are defined; educational and methodical materials and manuals on training adults in professionally focused English are studied; the objectives and tasks are formulated; the content of training in the "English for Professional Activity" program is determined; specifications are made to the content of language-competence levels in relation to the contingent of trainees; and appraisal and diagnostic tools are developed. Discussion and Conclusions: following the results of the research, a corporate training program in English was developed. In the program, intermediate and ultimate objectives and tasks of training are formulated; a minimum of special vocabulary is selected and organized by a textual and topical principle; grammatical structures for lecturing in the language and for understanding special professionally focused texts are selected; and effective technologies for language assimilation among adult students are recommended.

  • Conference Article
  • 10.18653/v1/2022.findings-naacl.156
Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence
  • Jan 1, 2022
  • Myeongjun Jang + 2 more

The logical negation property (LNP), which implies generating different predictions for semantically opposite inputs, is an important property that a trustworthy language model must satisfy. However, much recent evidence shows that large-size pre-trained language models (PLMs) do not satisfy this property. In this paper, we perform experiments using probing tasks to assess PLMs' LNP understanding. Unlike previous studies that only examined negation expressions, we expand the boundary of the investigation to lexical semantics. Through experiments, we observe that PLMs violate the LNP frequently. To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence, instead of relying on the distributional hypothesis. Through multiple experiments, we find that the task enables PLMs to learn lexical semantic information. Also, through fine-tuning experiments on 7 GLUE tasks, we confirm that it is a safe intermediate task that guarantees a similar or better performance of downstream tasks. Finally, we observe that our proposed approach outperforms our previous counterparts despite its time and resource efficiency.
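The meaning-matching idea above is an instance of intermediate-task training: first fit a model on an auxiliary objective, then reuse the learned parameters as the starting point for the downstream task. A minimal stdlib-only sketch of that two-stage workflow, using a toy perceptron and an invented vocabulary (all data, labels, and names here are hypothetical illustrations, not the paper's transformer-based setup):

```python
# Toy sketch of intermediate-task training (classical, stdlib-only).
# The vocabulary, data, and perceptron are invented for illustration;
# the paper fine-tunes pre-trained transformers instead.

VOCAB = ["cat", "dog", "animal", "pet", "rock", "mineral",
         "stone", "feline", "canine"]

def featurize(tokens):
    """One-hot bag-of-words over the toy vocabulary."""
    return [1.0 if w in tokens else 0.0 for w in VOCAB]

def train_perceptron(data, weights, epochs=20, lr=0.1):
    """Plain perceptron updates; returns the learned weight vector."""
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(w * xi for w, xi in zip(weights, x)) > 0 else 0
            if pred != y:
                weights = [w + lr * (y - pred) * xi
                           for w, xi in zip(weights, x)]
    return weights

# Stage 1 -- intermediate "meaning-matching" task:
# does this bag of gloss tokens describe the term? (label 1 = match)
intermediate = [
    (featurize({"cat", "feline", "pet"}), 1),
    (featurize({"cat", "mineral", "stone"}), 0),
    (featurize({"dog", "canine", "animal"}), 1),
    (featurize({"dog", "rock"}), 0),
]
w = train_perceptron(intermediate, [0.0] * len(VOCAB))

# Stage 2 -- the downstream task starts from the intermediate weights
# instead of from scratch, mirroring intermediate-task transfer.
downstream = [
    (featurize({"feline", "pet"}), 1),
    (featurize({"stone"}), 0),
]
w = train_perceptron(downstream, w)

pred = 1 if sum(wi * xi
                for wi, xi in zip(w, featurize({"feline"}))) > 0 else 0
print(pred)  # → 1: the transferred weights already separate the classes
```

The design point the sketch makes is the same "safe intermediate task" claim in the abstract: stage 2 begins from weights that already encode term-gloss associations, so the downstream learner needs fewer (here, zero) corrective updates.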

  • Video Transcripts
  • 10.48448/bkyy-y589
Beyond Distributional Hypothesis: Let Language Models Learn Meaning-Text Correspondence
  • Jun 27, 2022
  • Myeongjun Jang

The logical negation property (LNP), which implies generating different predictions for semantically opposite inputs (p is true iff ¬p is false), is an important property that a trustworthy language model must satisfy. However, much recent evidence shows that large-size pre-trained language models (PLMs) do not satisfy this property. In this paper, we perform experiments using probing tasks to assess PLMs' LNP understanding. Unlike previous studies that only examined negation expressions, we expand the boundary of the investigation to lexical semantics. Through experiments, we observe that PLMs violate the LNP frequently. To alleviate the issue, we propose a novel intermediate training task, named meaning-matching, designed to directly learn a meaning-text correspondence, instead of relying on the distributional hypothesis. Through multiple experiments, we find that the task enables PLMs to learn lexical semantic information. Also, through fine-tuning experiments on 7 GLUE tasks, we confirm that it is a safe intermediate task that guarantees a similar or better performance of downstream tasks. Finally, we observe that our proposed approach outperforms our previous counterparts despite its time and resource efficiency.

  • Single Report
  • Cited by 2
  • 10.21236/ada077955
Predicting Tank Gunnery Performance
  • Feb 1, 1978
  • Newell K Eaton

The specific objectives of this research were to evaluate the relationship between gunner's and tank commander's: (1) scores on a series of potentially predictive paper-and-pencil tests and main-gun Tank Table performance; (2) intermediate tank gunnery training task performance and main-gun Tank Table performance; (3) composite scores and their performance on the main-gun Tank Tables; and (4) to evaluate the relationship between aptitude test scores and driver performance as measured by drivers' ranking within their platoon.

  • Research Article
  • Cited by 21
  • 10.1103/physrevlett.132.100602
Generative Quantum Machine Learning via Denoising Diffusion Probabilistic Models.
  • Mar 5, 2024
  • Physical Review Letters
  • Bingzhi Zhang + 3 more

Deep generative models are a key enabling technology for computer vision, text generation, and large language models. Denoising diffusion probabilistic models (DDPMs) have recently gained much attention due to their ability to generate diverse and high-quality samples in many computer vision tasks, as well as to incorporate flexible model architectures and a relatively simple training scheme. Quantum generative models, empowered by entanglement and superposition, have brought new insight to learning classical and quantum data. Inspired by the classical counterpart, we propose the quantum denoising diffusion probabilistic model (QuDDPM) to enable efficiently trainable generative learning of quantum data. QuDDPM adopts sufficient layers of circuits to guarantee expressivity, while it introduces multiple intermediate training tasks as interpolation between the target distribution and noise to avoid barren plateaus and guarantee efficient training. We provide bounds on the learning error and demonstrate QuDDPM's capability in learning correlated quantum noise models, quantum many-body phases, and topological structure of quantum data. The results provide a paradigm for versatile and efficient quantum generative learning.
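The "interpolation between the target distribution and noise" that QuDDPM borrows can be illustrated with the classical DDPM forward process: a chain of intermediate distributions whose signal content decays step by step, each of which can serve as an easier intermediate training target. A short sketch of that classical analogue (the linear noise schedule and scalar data are assumed for illustration; this is not the paper's quantum-circuit construction):

```python
import math
import random

random.seed(0)

# Classical DDPM forward process on a scalar: q(x_t | x_0) is Gaussian
# with mean sqrt(abar_t) * x_0 and variance 1 - abar_t, so the chain
# interpolates from the data (t = 0) toward pure noise (t = T - 1).

T = 10
betas = [0.02 * (t + 1) for t in range(T)]  # linear schedule (assumed)

alpha_bars = []                             # cumulative product of (1 - beta)
prod = 1.0
for b in betas:
    prod *= (1.0 - b)
    alpha_bars.append(prod)

def forward_sample(x0, t):
    """Sample x_t ~ q(x_t | x_0) directly, without iterating steps."""
    abar = alpha_bars[t]
    eps = random.gauss(0.0, 1.0)
    return math.sqrt(abar) * x0 + math.sqrt(1.0 - abar) * eps

x0 = 1.0
xT = forward_sample(x0, T - 1)              # a nearly pure-noise sample

# The signal coefficient sqrt(abar_t) decays monotonically: early steps
# stay close to the data, late steps are dominated by noise, giving a
# ladder of progressively harder (noisier) intermediate targets.
signal = [math.sqrt(a) for a in alpha_bars]
print(signal[0], signal[-1])
```

Training against each rung of this ladder in turn, rather than against the fully noised endpoint, is the mechanism the abstract credits with avoiding barren plateaus.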
