Comprehensive research on semantic understanding, applicability, and impact analysis of legal provisions based on deep learning and natural language processing
3
- 10.1007/s11196-024-10157-9
- Apr 27, 2024
- International Journal for the Semiotics of Law - Revue internationale de Sémiotique juridique
27
- 10.4337/9781788972826.00017
- May 14, 2021
4
- 10.3390/electronics13040764
- Feb 15, 2024
- Electronics
5
- 10.1109/access.2023.3333946
- Jan 1, 2024
- IEEE Access
- 10.4236/ojml.2025.152016
- Jan 1, 2025
- Open Journal of Modern Linguistics
1
- 10.14569/ijacsa.2024.0150424
- Jan 1, 2024
- International Journal of Advanced Computer Science and Applications
7
- 10.1111/lapo.12164
- Apr 1, 2021
- Law & Policy
1
- 10.1007/s10115-024-02077-8
- Mar 14, 2024
- Knowledge and Information Systems
- 10.47852/bonviewjcce52024104
- Apr 8, 2025
- Journal of Computational and Cognitive Engineering
67
- 10.48161/qaj.v1n2a40
- Mar 31, 2021
- Qubahan Academic Journal
- Research Article
- 10.51584/ijrias.2024.908065
- Jan 1, 2024
- International Journal of Research and Innovation in Applied Science
This work aims to develop and analyze deep learning and natural language processing systems for medical information processing. The amount of data created about patients in the healthcare system is always increasing, and the manual review of this enormous volume of data, derived from numerous sources, is expensive and time-consuming, posing huge challenges for anyone attempting to review the data meaningfully. Additionally, during a patient visit, doctors write down the medical encounter and send it to nurses and other medical departments for processing. Often the doctor does not have enough time to record every observation made while examining the patient and taking the medical history, which delays the medical diagnosis. The goal of this research is therefore to create a system that addresses these issues. The suggested method extracts voice data from medical encounters and converts it to text using Deep Learning (DL) and Natural Language Processing (NLP) techniques. Moreover, the system improves medical intelligence processing by using deep learning to analyze medical datasets and produce diagnostic results, assisting medical professionals at various levels in making realistic, intelligent decisions in real time regarding crucial health issues. The system was designed using the Object-Oriented Analysis and Design Methodology (OOADM), and the user interfaces were implemented using NLP techniques, particularly speech recognition and natural language understanding. Speech recognition allows free-text notes to be taken, which can drastically cut down the time medical staff spend on labor-intensive clinical recording. By extracting different pieces of data for medical diagnosis and producing results in a matter of seconds, the deep learning algorithm demonstrates a significant capacity for building clinical decision support systems. The system's results demonstrate that the deep learning algorithm enabled medical intelligence with 96.7 percent accuracy.
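The paper does not publish code, so the following is only a minimal sketch of the described pipeline: it assumes the open-source SpeechRecognition library for the voice-to-text stage, and a hypothetical keyword-spotting step (the names SYMPTOM_TERMS and extract_symptoms are illustrative) stands in for the paper's deep learning diagnosis model.

```python
# Hypothetical sketch: transcribe a recorded medical encounter and flag
# symptom keywords for a downstream diagnostic model.
import speech_recognition as sr

SYMPTOM_TERMS = {"fever", "cough", "fatigue", "headache", "nausea"}  # illustrative list


def transcribe_encounter(wav_path: str) -> str:
    """Convert a recorded doctor-patient conversation to text."""
    recognizer = sr.Recognizer()
    with sr.AudioFile(wav_path) as source:
        audio = recognizer.record(source)
    # Google's free web speech API; any offline engine could be swapped in.
    return recognizer.recognize_google(audio)


def extract_symptoms(transcript: str) -> set[str]:
    """Naive keyword spotting, standing in for the paper's NLP pipeline."""
    tokens = {t.strip(".,;:!?").lower() for t in transcript.split()}
    return tokens & SYMPTOM_TERMS


if __name__ == "__main__":
    text = transcribe_encounter("encounter.wav")  # placeholder recording
    print("Symptoms found:", extract_symptoms(text))
```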
- Research Article
- 10.52783/jes.1704
- Apr 4, 2024
- Journal of Electrical Systems
The modern artwork analysis display system, empowered by natural language processing (NLP) technology, revolutionizes the way audiences interact with and understand art. By integrating NLP algorithms, this system offers a dynamic and user-friendly platform for analyzing and displaying artwork. Utilizing NLP, visitors can engage in interactive conversations with the system, asking questions or making inquiries about the artwork on display. The system processes these inquiries, extracting relevant information from curated databases and scholarly sources to provide insightful and context-rich responses. Additionally, NLP algorithms can analyze textual descriptions, artist statements, and critical reviews to offer nuanced interpretations and historical context for each artwork. This paper presents the design and implementation of an innovative modern artwork analysis and display system, leveraging deep learning and natural language processing (NLP) technology, integrated with Multi-Feature Extraction Fuzzy Classification (MFEFC). The system offers a comprehensive platform for analyzing and presenting modern artworks, enhancing user engagement and understanding. Deep learning algorithms are employed to extract high-level features from visual artworks, allowing for automatic recognition of artistic styles, genres, and themes. Concurrently, NLP techniques process textual descriptions, artist biographies, and critical reviews to provide contextual information and interpretative insights. The integration of MFEFC enables precise classification of artworks based on multiple features extracted from both visual and textual sources, facilitating accurate analysis and categorization. Simulation of the NLP techniques demonstrated an average precision of 90% in extracting relevant contextual information from textual descriptions and artist biographies. Furthermore, MFEFC achieved a classification accuracy of 88% in categorizing artworks based on combined visual and textual features.
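MFEFC is not publicly specified, so the sketch below shows one plausible reading under stated assumptions: visual and textual feature vectors are concatenated, and a fuzzy c-means-style membership is computed against class prototypes. All names and numbers are invented for illustration.

```python
# Illustrative sketch of multi-feature fuzzy classification: concatenate
# visual and textual features, then assign fuzzy memberships by inverse
# distance to class prototypes (fuzzy c-means-style membership formula).
import numpy as np


def fuzzy_memberships(x: np.ndarray, prototypes: np.ndarray, m: float = 2.0) -> np.ndarray:
    """Membership of sample x in each class; memberships sum to 1."""
    d2 = np.sum((prototypes - x) ** 2, axis=1) + 1e-12  # squared distances
    inv = d2 ** (-1.0 / (m - 1.0))
    return inv / inv.sum()


# Toy example: 2-D visual features + 2-D text features, two style classes.
visual = np.array([0.8, 0.1])
textual = np.array([0.3, 0.9])
sample = np.concatenate([visual, textual])
prototypes = np.array([[0.9, 0.2, 0.2, 0.8],   # e.g. an "impressionism" prototype
                       [0.1, 0.7, 0.8, 0.1]])  # e.g. a "cubism" prototype
print(fuzzy_memberships(sample, prototypes))   # e.g. [0.98, 0.02]
```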
- Preprint Article
- 10.5194/egusphere-egu25-16268
- Mar 15, 2025
The advent of extensive digital datasets coupled with advancements in artificial intelligence (AI) is revolutionizing our ability to extract meaningful insights from complex patterns in natural sciences. In this context, the targeted classification of textual descriptions, particularly those detailing the granulometry of unconsolidated sediments or the fracturing state of rock masses, combining supervised deep learning and natural language processing (NLP), is a promising method to refine large-scale geological and hydrogeological models by enriching them with increased data volume. Several databases are replete with qualitative geological data such as borehole logs, which, while abundant, are not readily assimilated into quantitative hydrogeological modeling due to the extensive time required to process the written descriptions into operationally significant units like hydrofacies. This conversion typically necessitates expert analysis of each report but can be expedited through the application of NLP techniques rooted in AI. The primary objectives of this research are twofold: (i) to develop a robust classification model that leverages geological descriptions alongside grain size data, and (ii) to standardize a vast array of sparse and heterogeneous stratigraphic log data for integration into large-scale hydrogeological applications. The Po River alluvial plain in northern Italy (45,700 km²) serves as the pilot area for this study due to the homogeneous shallow subsurface geology, the dense borehole coverage, and the availability of a pre-labelled training set. This research demonstrates the conversion of qualitative geological information from a very large dataset of stratigraphic logs (encompassing 387,297 text descriptions from 39,265 boreholes) into a dataset of semi-quantitative information. This transformation, primed for hydrogeological modeling, is facilitated by an operational classification system using a deep learning-based NLP algorithm to categorize complex geological and lithostratigraphic text descriptions according to grain size-based hydrofacies. A supervised text classification algorithm, founded on a Long Short-Term Memory (LSTM) architecture, was meticulously developed, trained, and validated using 86,611 pre-labelled entries encompassing all sediment types within the study region. The word embedding technique enhanced the model accuracy and learning efficiency by quantifying the semantic distances among geological terms. The outcome of this work is a novel dataset of semi-quantitative hydrogeological information, boasting a classification model accuracy of 97.4%. This dataset was incorporated into expansive modeling frameworks, enabling the assignment of hydrogeological parameters based on grain size data, integrating the uncertainty stemming from misclassification. This has markedly increased the spatial density of available information from 0.34 data points/km² to 8.7 data points/km². The study findings align closely with the existing literature, offering a robust spatial reconstruction of hydrofacies at different scales. This has significant implications for groundwater research, particularly in the realm of quantitative modeling at a regional scale.
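As a rough illustration of the described architecture (not the authors' code), a Keras model with an embedding layer feeding an LSTM classifier might look as follows; the vocabulary size, sequence length, and layer widths are assumptions, with five hydrofacies classes as an example.

```python
# Minimal sketch of an embedding + LSTM text classifier for borehole-log
# descriptions, mapped to grain-size-based hydrofacies classes.
import tensorflow as tf

NUM_WORDS, SEQ_LEN, NUM_CLASSES = 20_000, 40, 5  # assumed hyperparameters

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN,), dtype="int32"),
    tf.keras.layers.Embedding(NUM_WORDS, 128),  # embeddings encode semantic distance
    tf.keras.layers.LSTM(64),                   # sequence model over the description
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),  # hydrofacies classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
# model.fit(x_train, y_train, validation_split=0.1) would train on the
# 86,611 pre-labelled descriptions, tokenized and padded to SEQ_LEN.
```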
- Research Article
- 10.52783/cana.v32.5163
- Apr 24, 2025
- Communications on Applied Nonlinear Analysis
With the increased prevalence of social media, depression is becoming a prominent health issue worldwide, and this has opened new opportunities for researchers to explore techniques for depression detection in social media texts. According to the World Health Organization (WHO), depression may lead to various mental health diseases and suicide if not detected at an early stage. Deep learning and natural language processing are becoming widely adopted techniques among researchers. This review provides a thorough examination of these techniques for depression detection in users' posts. These techniques are harnessed to identify linguistic markers related to depression, such as sentiment and emotional tone, and to capture temporal dependencies in texts. Transformer models represent the next level of deep learning techniques, enhanced with a self-attention mechanism that enables the automatic analysis of text sequences over time, semantic feature extraction, and the interpretation of context-sensitive language. Multimodal approaches using these techniques integrate textual and visual data to improve the accuracy of depression detection. Despite notable advancements, there are still many challenges to address, such as data availability, privacy, ethics, and model interpretability. The primary aim of this paper is to explore the evolution of these techniques, deep learning and NLP for depression detection, datasets for testing, and research gaps and future directions. We conducted a Systematic Literature Review (SLR) on research and review studies published in various conferences and peer-reviewed journals. Finally, we provide a brief summary of key findings.
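As a hedged illustration of the transformer-based classifiers the review discusses, the snippet below runs an off-the-shelf sentiment model from the Hugging Face transformers library; a deployed system would use a model fine-tuned on depression-labelled posts, for which this generic sentiment model merely stands in.

```python
# Off-the-shelf transformer sentiment model as a stand-in for the
# depression-specific fine-tuned classifiers surveyed in the review.
from transformers import pipeline

clf = pipeline("text-classification",
               model="distilbert-base-uncased-finetuned-sst-2-english")
posts = ["I can't remember the last time I felt happy.",
         "Had a great walk with friends today!"]
for post, result in zip(posts, clf(posts)):
    # Negative emotional tone is one linguistic marker; real systems combine many.
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```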
- Conference Article
5
- 10.1117/12.2582009
- Feb 15, 2021
Tympanic membrane (TM) diseases are among the most frequent pathologies, affecting the majority of the pediatric population. Video otoscopy is an effective tool for diagnosing TM diseases. However, access to Ear, Nose, and Throat (ENT) physicians is limited in many sparsely populated regions worldwide. Moreover, high inter- and intra-reader variability impairs accurate diagnosis. This study proposes a digital otoscopy video summarization and automated diagnostic label assignment model that benefits from the synergy of deep learning and natural language processing (NLP). Our main motivation is to obtain the key visual features of TM diseases from their short descriptive reports. Our video database consisted of 173 otoscopy records from three different TM diseases. To generate composite images, we utilized our previously developed semantic segmentation-based stitching framework, SelectStitch. An ENT expert reviewed these composite images and wrote short reports describing the TM's visual landmarks and the disease for each ear. Based on NLP and a bag-of-words (BoW) model, we determined the five most frequent words characterizing each TM diagnostic category. A neighborhood components analysis was used to predict the diagnostic label of each test instance. The proposed model provided an overall F1-score of 90.2%. To the best of our knowledge, this is the first study to utilize textual information in computerized ear diagnostics. Our model has the potential to become a telemedicine application that can automatically diagnose the TM by analyzing its visual descriptions provided by a healthcare provider from a mobile device.
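A minimal sketch of the described pipeline, with toy reports in place of the study's expert-written ones: bag-of-words counts are densified, projected with scikit-learn's Neighborhood Components Analysis, and classified by nearest neighbour. (The study additionally restricts features to the five most frequent words per category, which is omitted here.)

```python
# Bag-of-words + NCA + nearest-neighbour diagnosis on toy otoscopy reports.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neighbors import KNeighborsClassifier, NeighborhoodComponentsAnalysis
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

# Invented stand-ins for the expert-written descriptive reports.
reports = ["retraction pocket with effusion", "perforation dry central",
           "effusion bubbles retraction", "central perforation with discharge"]
labels = ["effusion", "perforation", "effusion", "perforation"]

model = make_pipeline(
    CountVectorizer(),                                               # BoW counts
    FunctionTransformer(lambda X: X.toarray(), accept_sparse=True),  # densify for NCA
    NeighborhoodComponentsAnalysis(n_components=2, random_state=0),
    KNeighborsClassifier(n_neighbors=1),
)
model.fit(reports, labels)
print(model.predict(["dry perforation with discharge"]))  # likely ['perforation']
```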
- Conference Article
2
- 10.1109/picict53635.2021.00030
- Sep 1, 2021
Artificial intelligence (AI) tools have significantly bolstered and simplified the complex tasks of forecasting risks, catching cancer earlier, and predicting survival after treatment. Nowadays, they are emerging as a major adjunct to clinical and medical approaches. This paper concentrates on the use of deep learning (DL) and natural language processing (NLP) to scan, diagnose, and reduce future negative outcomes in cancer and oncology, in addition to giving a structured, systematic analysis of the reviewed studies. The primary aim of this survey is to provide helpful guidance for researchers who need a direct evaluation of existing techniques for more effective maintenance and development.
- Research Article
28
- 10.1016/j.dajour.2023.100301
- Aug 16, 2023
- Decision Analytics Journal
An integrated deep learning and natural language processing approach for continuous remote monitoring in digital health
- Conference Article
1
- 10.1109/isssc56467.2022.10051524
- Dec 15, 2022
In today’s digital age, businesses create tremendous amounts of data as part of their regular operations. On legacy or cloud platforms, this data is stored mainly in structured, semi-structured, and unstructured formats, and most of the data kept in the cloud is amorphous, containing sensitive information. With the evolution of AI, organizations are using deep learning and natural language processing to extract the meaning of this big data through unstructured data analysis and insights (UDAI). This study aims to investigate the influence of these unstructured big data analyses and insights on the organization’s decision-making system (DMS), financial sustainability, customer lifetime value (CLV), and long-term growth prospects, while encouraging a culture of self-service analytics. This study uses a validated survey instrument to collect responses from Fortune 500 organizations to find the adaptability and influence of UDAI in current data-driven decision-making and how it impacts organizational DMS, financial sustainability, and CLV.
- Research Article
- 10.1093/bjro/tzad009
- Dec 12, 2023
- BJR Open
Objectives: This diagnostic study retrospectively assessed the accuracy of radiologists, using the deep learning and natural language processing chest algorithms implemented in Clinical Review version 3.2, for: pneumothorax and rib fractures in digital chest X-ray radiographs (CXR); aortic aneurysm, pulmonary nodules, emphysema, and pulmonary embolism in CT images. Methods: The study design was double-blind (artificial intelligence [AI] algorithms and humans), retrospective, non-interventional, and at a single NHS Trust. Adult patients (≥18 years old) scheduled for CXR and CT were invited to enroll as participants through an opt-out process. Reports and images were de-identified and processed retrospectively, and AI-flagged discrepant findings were assigned to two lead radiologists, each blinded to patient identifiers and the original radiologist. The radiologists’ findings for each clinical condition were tallied as a verified discrepancy (true positive) or not (false positive). Results: The missed findings were: 0.02% rib fractures, 0.51% aortic aneurysm, 0.32% pulmonary nodules, 0.92% emphysema, and 0.28% pulmonary embolism. The positive predictive values (PPVs) were: pneumothorax (0%), rib fractures (5.6%), aortic dilatation (43.2%), pulmonary emphysema (46.0%), pulmonary embolus (11.5%), and pulmonary nodules (9.2%). The PPV for pneumothorax was nil owing to the lack of available studies analysed for outpatient activity. Conclusions: The number of missed findings was far less than generally predicted. The chest algorithms deployed retrospectively were a useful quality tool, and AI augmented the radiologists’ workflow. Advances in knowledge: The diagnostic accuracy of our radiologists generated missed findings of 0.02% for rib fractures in CXR studies, and 0.51% for aortic dilatation, 0.32% for pulmonary nodules, 0.92% for pulmonary emphysema, and 0.28% for pulmonary embolism in CT studies, all retrospectively evaluated with AI used as a quality tool to flag potential missed findings. It is important to account for the prevalence of these chest conditions in clinical context and to use appropriate clinical thresholds for decision-making, not relying solely on AI.
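For clarity, the positive predictive value reported above is the fraction of AI-flagged discrepancies that the radiologists verified. A worked example with invented counts (the study reports only rates, not raw numbers):

```python
# Positive predictive value: of all AI-flagged discrepancies, the fraction
# verified by the lead radiologists as true positives.
def ppv(true_positives: int, false_positives: int) -> float:
    return true_positives / (true_positives + false_positives)


# Illustrative only: if 100 aortic-dilatation flags yielded 43 verified
# discrepancies, PPV = 43/100, close to the reported 43.2%.
print(f"PPV = {ppv(43, 57):.1%}")  # -> PPV = 43.0%
```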
- Research Article
1
- 10.1016/j.eswa.2023.122911
- Dec 9, 2023
- Expert Systems with Applications
Enhancing local citation recommendation with recurrent highway networks and SciBERT-based embedding
- Conference Article
- 10.1109/icdcece53908.2022.9792819
- Apr 23, 2022
Thousands of people across the world use public transit every day. People regularly travel to new areas using public transportation and may at times feel entirely disoriented in a new environment. At this point, this chatbot is here to assist. A chatbot is a chat interface that communicates with humans and is frequently referred to as one of the most promising technologies for human-machine interaction. It is a software program that employs deep learning methods and natural language processing (NLP) to conduct an online chat conversation via text or voice, and, in the form of a GUI, it offers direct communication comparable to a conscious human agent. It analyses the user’s inquiry and extracts the relevant database values. The cognitive computing technique employed in these chatbots is responsible for effectively comprehending the user’s intents and avoiding misunderstandings. Once the user’s intent is recognized, the chatbot responds to the query with the most relevant response. The user subsequently receives all of the information regarding the bus names and numbers, allowing them to travel comfortably to their intended destination. The proposed research makes use of several accessible Application Programming Interfaces (APIs), including the Dialogflow API, for effective NLP integration with our TARS chatbot.
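A hedged sketch of the Dialogflow intent-detection call such a chatbot relies on, using the official google-cloud-dialogflow Python client; the project ID, session ID, and the bus-route fulfillment wiring are placeholders, not the authors' code.

```python
# Detect the user's intent with Dialogflow and return the agent's reply.
from google.cloud import dialogflow


def ask_transit_bot(project_id: str, session_id: str, text: str) -> str:
    client = dialogflow.SessionsClient()
    session = client.session_path(project_id, session_id)
    query_input = dialogflow.QueryInput(
        text=dialogflow.TextInput(text=text, language_code="en")
    )
    response = client.detect_intent(
        request={"session": session, "query_input": query_input}
    )
    # The matched intent would be wired to a bus-route database lookup.
    return response.query_result.fulfillment_text


# Example (requires GCP credentials and a configured agent):
# print(ask_transit_bot("my-gcp-project", "user-42", "Which bus goes downtown?"))
```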
- Research Article
- 10.47917/j.es.20190102
- Jan 1, 2020
- Emerging Science
A method for sentiment analysis of film reviews based on deep learning and natural language processing is disclosed. The method for analyzing emotions of film reviews by deep learning includes: getting film reviews text data and marking positive and negative emotions in film reviews; preprocessing the film reviews by removing redundant information; vectorizing film reviews text according to the bag-of-words model; splitting the vectorized film reviews into training sets and test sets; setting up the initial deep learning model of film reviews sentiment analysis, which connects and integrates four convolution neural network layers, two pooling layers, and two full connected layers; training the initial deep learning model by training data set to generate the final deep learning model, using the final deep learning model to detect the film reviews test set and output the detection results. The invention can accurately distinguish positive and negative emotions of film reviews, and the deep learning model has a simple structure and a small amount of calculation, thereby improving the speed of emotion analysis of film reviews. [Figure 1: text preprocessing pipeline (raw review → remove HTML → keep letters only → lowercase and split → remove stopwords → join meaningful words into a clean review).] [Figure 2: bag-of-words example ("John likes to watch movies. Mary likes too. John also likes to watch football game." mapped to the vector [1, 1, 1, 1, 0, 1, 1, 1, 0, 0]).]
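A short sketch of the preprocessing and bag-of-words steps summarized in Figures 1 and 2; the stopword list here is a small illustrative subset, not the full list the method would use.

```python
# Clean a raw review and vectorize it with a bag-of-words model.
import re

STOPWORDS = {"to", "the", "a", "and", "too", "also"}  # illustrative subset


def clean_review(raw: str) -> list[str]:
    text = re.sub(r"<[^>]+>", " ", raw)     # remove HTML tags
    text = re.sub(r"[^a-zA-Z]", " ", text)  # keep letters only
    words = text.lower().split()            # lowercase & split
    return [w for w in words if w not in STOPWORDS]  # keep meaningful words


def bag_of_words(tokens: list[str], vocabulary: list[str]) -> list[int]:
    return [tokens.count(term) for term in vocabulary]


tokens = clean_review("John likes to watch movies. Mary likes too.")
vocab = ["john", "likes", "watch", "movies", "mary", "football", "game"]
print(bag_of_words(tokens, vocab))  # -> [1, 2, 1, 1, 1, 0, 0], input to the CNN
```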
- Research Article
41
- 10.1002/pmic.202100232
- Jan 1, 2022
- PROTEOMICS
In molecular biology and proteomics, accurately predicting the functions of proteins is a very critical step. However, detecting protein functions through biological experiments consumes a lot of time and resources. Therefore, it is necessary to develop an accurate and reliable computational method for this prediction purpose. Since a growing number of deep learning and natural language processing (NLP) models have been developed recently, they hold potential to assist with protein function problems. Therefore, Wang et al. applied them to extract the hidden features of protein sequences and improve the performance of protein function prediction. As a case study, they used their approach to develop a web server, named prPred-DRLF, to predict plant resistance proteins, which play important roles in the detection of pathogen invasion. Cross-validation and independent test results indicate that prPred-DRLF outperformed current state-of-the-art prediction methods on the same datasets. This excellent performance shows that deep representative learning (using deep learning and NLP) is an accurate and reliable method for protein function prediction.
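prPred-DRLF's internals are not reproduced here; as a generic illustration of treating protein sequences as language, the sketch below tokenizes sequences into overlapping k-mer "words" and learns embeddings with gensim's Word2Vec. The sequences and parameters are invented.

```python
# Treat protein sequences as sentences of overlapping k-mers so NLP
# embedding models can learn features for downstream function classifiers.
from gensim.models import Word2Vec


def kmers(sequence: str, k: int = 3) -> list[str]:
    """Overlapping k-mer 'words' from a protein sequence."""
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]


sequences = ["MKVLATGLLACF", "MKVIATGLLSCF", "GHHEAELKPLAQ"]  # toy sequences
corpus = [kmers(s) for s in sequences]
model = Word2Vec(corpus, vector_size=16, window=5, min_count=1, seed=0)
print(model.wv["MKV"][:4])  # first entries of the learned k-mer embedding
```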
- Conference Article
6
- 10.1109/icriis53035.2021.9617079
- Oct 25, 2021
This work was inspired by how the usage of social media platforms in Malaysia, such as Twitter, has drastically increased since the recent Covid-19 pandemic. While practicing social distancing and other pandemic regulations benefited and protected physical health, the mental health of many was affected negatively. People generally depend on interactions with other humans, and once the physical form of interaction was cut off, they tended to turn to social media. A Twitter sentiment analysis approach was used to find the causal link between social media and mental health. This project aims to utilise the broadened scope of social-media-based mental health measures, since research shows evidence of a link between depression and specific linguistic features. The problem statement of this project is therefore to develop an algorithm that can predict text-based depression symptoms using deep learning and Natural Language Processing (NLP). The objective of the project is to identify depressive tweets using NLP and deep learning in the urban cities of Malaysia from the beginning of the Covid-19 period, to enable individuals, their caregivers, parents, and even medical professionals to identify the linguistic clues that point towards signs of mental health deterioration. Additionally, this paper also researches how to make the proposed system identify words that represent depression and categorize them accordingly, as well as how to improve the accuracy of the system in identifying tweets that display the depression-related words based on their specific location. This objective is achieved following a methodology using a deep learning approach and NLP techniques. A recurrent neural network approach known as Long Short-Term Memory (LSTM), a form of advanced RNN that allows information to be preserved, was implemented in this project. Analysing the linguistic indicators in tweets allows for a low-profile assessment that can supplement traditional services, consequently allowing much earlier detection of depressive symptoms. Since this research entails finding the link between tweets and machine learning's ability to detect depressive symptoms, the project brings forth meaningful help for those who are mentally affected but are unable to seek help or are unsure about diagnosing themselves, as it helps alert the government and psychologists to the need for it. The project thus far has an accuracy rate of 94%, along with a precision of 0.94, recall of 0.96, and an F1 score of 0.95.
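As a quick consistency check of the reported metrics, the F1 score is the harmonic mean of precision and recall:

```python
# F1 is the harmonic mean of precision and recall.
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)


print(round(f1(0.94, 0.96), 2))  # -> 0.95, matching the reported F1 score
```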
- Book Chapter
- 10.1007/978-981-19-1610-6_50
- Jul 27, 2022
This literature review gives an up-to-date overview of studies aimed at analyzing the information contained in social media messages that reflect malicious activity threatening cyberspace. This work presents studies aimed at detecting and predicting cyberattacks intended to alter, control, manipulate, damage, or affect victims' digital services, computing equipment, and communications equipment. The method used in this systematic literature review is based on the model proposed by Petersen et al. The conclusion from the studies showed that the use of machine learning algorithms, deep learning, and natural language processing tools contributes to better detection of threats in social media. For future research, it is necessary to continue implementing the most recent machine learning, deep learning, and natural language processing tools to improve the effectiveness of the results. The findings of this systematic review will enable researchers to develop methodologies and mechanisms that could help detect and prevent future cyberattacks. Keywords: systematic literature review, cyberattack detection, social media analysis, Twitter posting analysis.