INTU-AI: Digitalization of Police Interrogation Supported by Artificial Intelligence

Abstract

Traditional police interrogation processes remain largely time-consuming and reliant on substantial human effort for both analysis and documentation. Intuition Artificial Intelligence (INTU-AI) is a Windows application designed to digitalize the administrative workflow associated with police interrogations while enhancing procedural efficiency through the integration of AI-driven emotion recognition models. The system employs a multimodal approach that captures and analyzes emotional states using three primary vectors: Facial Expression Recognition (FER), Speech Emotion Recognition (SER), and Text-based Emotion Analysis (TEA). This triangulated methodology aims to identify emotional inconsistencies and detect potential suppression or concealment of affective responses by interviewees. INTU-AI serves as a decision-support tool rather than a replacement for human judgment: by automating bureaucratic tasks, it allows investigators to focus on critical aspects of the interrogation process. The system was validated in practical training sessions with inspectors and assessed through a 12-question questionnaire. The results indicate strong acceptance of the system across usability, existing functionality, practical utility, and user experience, corroborated by open-ended qualitative responses.
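The triangulation described above can be sketched in a few lines. The fusion rule, emotion labels, and probabilities below are illustrative assumptions about the general approach, not INTU-AI's actual implementation.

```python
# Illustrative sketch of cross-modal emotion triangulation (an assumption
# about the general approach, not INTU-AI's actual implementation): each
# modality produces a probability distribution over the same emotion
# labels, and disagreement between modalities is flagged as a possible
# sign of suppressed or concealed affect.

def top_emotion(dist):
    """Label with the highest probability in one modality's output."""
    return max(dist, key=dist.get)

def triangulate(fer, ser, tea, min_agreement=3):
    """Flag an inconsistency when fewer than `min_agreement` of the
    three modalities (face, speech, text) vote for the same emotion."""
    votes = [top_emotion(d) for d in (fer, ser, tea)]
    consensus = max(set(votes), key=votes.count)
    return {
        "votes": votes,
        "consensus": consensus,
        "inconsistent": votes.count(consensus) < min_agreement,
    }

# Face and text read as neutral, but the voice leaks fear.
fer = {"neutral": 0.70, "happy": 0.10, "sad": 0.10, "angry": 0.05, "fearful": 0.05}
ser = {"neutral": 0.20, "happy": 0.10, "sad": 0.10, "angry": 0.10, "fearful": 0.50}
tea = {"neutral": 0.60, "happy": 0.20, "sad": 0.10, "angry": 0.05, "fearful": 0.05}

result = triangulate(fer, ser, tea)
print(result)
```

In this toy run the speech modality dissents from the face and text modalities, so the case is flagged for the investigator rather than decided automatically, matching the decision-support framing above.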


Similar Papers
  • Conference Article
  • 10.1109/tencon50793.2020.9293820
A Smart Space with Music Selection Feature Based on Face and Speech Emotion and Expression Recognition
  • Nov 16, 2020
  • Jose Martin Z Maningo + 7 more

The technological capabilities of computers continue to improve in ways that once seemed impossible. Most people already use computers to make everyday life easier, so it is vital to bridge the gap between humans and computers and provide more suitable aid to the user. One way to do this is to use emotion recognition as a tool, allowing the computer to understand and analyze how it can help its user on a much deeper level. This paper proposes a way to use both face and speech emotion recognition as the basis for selecting music that can improve one's mood or relieve stress. To accomplish this, Support Vector Machines with different kernels are used to build the models for validation and testing of both face and speech emotion recognition. The final integrated system yielded an accuracy rate of 78.5%.
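The kernel comparison this paper describes rests on the standard SVM kernel functions, which can be illustrated directly. This is a generic sketch, not the authors' code; the sample vectors are arbitrary.

```python
import math

# The three standard SVM kernels such models typically choose between
# (a generic illustration, not the paper's code). The kernel defines the
# similarity measure the SVM separates classes with, which is why kernel
# choice can matter differently for face and speech features.

def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def poly_kernel(x, y, degree=3, coef0=1.0):
    return (linear_kernel(x, y) + coef0) ** degree

def rbf_kernel(x, y, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

x, y = [1.0, 0.0], [0.0, 1.0]
print(linear_kernel(x, y))  # orthogonal vectors: 0.0
print(rbf_kernel(x, x))     # identical vectors: 1.0
```

In practice one would evaluate each kernel by cross-validated accuracy on the face and speech feature sets and keep the best performer per modality.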

  • Research Article
  • Cited by 9
  • 10.1080/20008066.2023.2214388
Adults with a history of childhood maltreatment with and without mental disorders show alterations in the recognition of facial expressions
  • Jun 15, 2023
  • European Journal of Psychotraumatology
  • Lara-Lynn Hautle + 7 more

Background: Individuals with child maltreatment (CM) experiences show alterations in emotion recognition (ER). However, previous research has mainly focused on populations with specific mental disorders, leaving it unclear whether alterations in the recognition of facial expressions are related to CM, to the presence of mental disorders, or to their combination; it has also concentrated on ER of emotional rather than neutral facial expressions, and commonly used static stimulus material. Objective: We assessed recognition of dynamic (closer to real life) negative, positive and neutral facial expressions in individuals characterised by CM, rather than a specific mental disorder. Moreover, we assessed whether they show a negativity bias for neutral facial expressions and whether the presence of one or more mental disorders affects recognition. Methods: Ninety-eight adults with CM experiences (CM+) and 60 non-maltreated (CM−) adult controls watched 200 non-manipulated coloured video sequences, showing 20 neutral and 180 emotional facial expressions, and indicated whether they interpreted each expression as neutral or as one of eight emotions. Results: The CM+ group showed significantly lower scores in the recognition of positive, negative and neutral facial expressions than the CM− group (p < .050). Furthermore, the CM+ group showed a negativity bias for neutral facial expressions (p < .001). When accounting for mental disorders, significant effects stayed consistent, except for the recognition of positive facial expressions: individuals from the CM+ group with, but not without, a mental disorder scored lower than controls without a mental disorder. Conclusions: CM might have long-lasting influences on the ER abilities of those affected. Future research should explore possible effects of ER alterations on everyday life, including implications of the negativity bias for neutral facial expressions on emotional wellbeing and relationship satisfaction, providing a basis for interventions that improve social functioning.

  • Conference Article
  • Cited by 1
  • 10.54941/ahfe1001973
An interactive design solution for prenatal emotional nursing of pregnant women
  • Jan 1, 2022
  • Leyi Wu + 2 more

With the continuous development of interactive technology, informatization has begun to integrate into people's lives [1]. Long neglected, postpartum depression reminds us that we need to pay attention to maternal emotional needs and prenatal care [2]. Interactive products for prenatal emotional care are therefore worth researching. Surveys show that speech emotion and facial expression recognition technologies in artificial intelligence are developing rapidly and have large potential for extensive use [3,4]. It is therefore necessary and feasible to design prenatal emotional diagnosis tools for pregnant women. This study designed a product that cares for pregnant women by identifying their emotional needs through AI recognition technologies. Appropriate prenatal intervention is conducive to the prevention of postpartum depression [5,6], and artificial intelligence recognition technology can provide an appropriate emotional care plan. This can reduce the difficulty of training medical personnel and of relatives caring for pregnant women, thereby reducing the risk of postpartum depression.
QUESTIONS: Collecting opinions and information from previous studies is an important reference for this study, which needs to solve the following problems: 1) How to design an artificial intelligence product that can accurately diagnose the emotions of pregnant women? 2) How to integrate AI facial emotion recognition technology? 3) How to help nurses and families take care of users more professionally and easily through an information database? 4) How to adapt the emotional care program provided by interactive products to different pregnant women?
METHODS: The research methods of this study are as follows: 1) Observing the working processes of midwives and psychologists to find which parts can be assisted by machines [7]. 2) Understanding the emotional needs of pregnant women through interviews. 3) Brainstorming from the collected data and research findings, then designing interactive products that can practically solve the emotional care problems of pregnant women. 4) Verifying the feasibility of emotion recognition through experiments with AI emotion recognition technologies.
CONCLUSIONS: With the continuous development of artificial intelligence, more and more AI products have entered our lives [1]. This study aims to help pregnant women prevent prenatal and postpartum depression and maintain their health through artificial intelligence interaction technologies, exploring a solution to the problem that prenatal and postpartum emotions are neglected. The design is still at the conceptual stage, but it seems only a matter of time before it is applied in practice [8].
REFERENCES: [1] Lee H S, Lee J. Applying Artificial Intelligence in Physical Education and Future Perspectives. 2021. [2] Beck C T. Postpartum depression: it isn't just the blues. American Journal of Nursing, 2006, 106(5):40-50. [3] Ramakrishnan S, El Emary I M M. Speech emotion recognition approaches in human computer interaction. Telecommunication Systems, 2013, 52(3). [4] Samara A, Galway L, Bond R, et al. Affective state detection via facial expression analysis within a human-computer interaction context. Journal of Ambient Intelligence & Humanized Computing, 2017. [5] Clatworthy J. The effectiveness of antenatal interventions to prevent postnatal depression in high-risk women. Journal of Affective Disorders, 2012, 137(1-3):25-34. [6] Ju C H, Hye K J, Jae L J. Antenatal Cognitive-behavioral Therapy for Prevention of Postpartum Depression: A Pilot Study. Yonsei Medical Journal, 2008, 49(4):553. [7] Fletcher A, Murphy M, Leahy-Warren P. Midwives' experiences of caring for women's emotional and mental well-being during pregnancy. Journal of Clinical Nursing, 2021. [8] Jin X, Liu C, Xu T, et al. Artificial intelligence biosensors: Challenges and prospects. Biosensors & Bioelectronics, 2020, 165:112412.

  • Book Chapter
  • Cited by 2
  • 10.4018/979-8-3693-4143-8.ch007
Advancements in Facial Expression Recognition Using Machine and Deep Learning Techniques
  • May 14, 2024
  • Shivani Singh + 3 more

In the field of computer vision, facial expression recognition is an emerging area that analyzes visual face data to understand human emotions. Facial expression detection and recognition have recently become popular research topics. The literature is compiled from several credible studies released in the last 10 years. In recent years, artificial intelligence has evolved considerably, accompanied by a rise in experimentation with various methodologies for facial expression recognition, which have given promising results in accurately identifying and recognizing emotions from input modalities such as images, text, facial expressions, and physiological signals. However, accurate analysis of basic emotions like anger, happiness, sadness, and fear remains a challenge. This chapter provides valuable insights for researchers interested in advancing facial emotion recognition using machine learning and deep learning techniques.

  • Book Chapter
  • Cited by 2
  • 10.1007/978-3-030-68007-7_7
Recognition and Visualization of Facial Expression and Emotion in Healthcare
  • Jan 1, 2021
  • Hayette Hadjar + 5 more

To make the SenseCare KM-EP system more useful and smart, we integrated emotion recognition from facial expressions. People with dementia have capricious feelings; the aim of this paper is to measure and predict these facial expressions. Analysis of data from emotional monitoring of dementia patients at home or during medical treatment will help healthcare professionals judge the behavior of people with dementia in a better-informed way. In relation to the SenseCare research project, this paper describes methods of video analysis focusing on facial expression and visualization of emotions, in order to implement an "Emotional Monitoring" web tool that facilitates recognition and visualization of facial expressions and raises the quality of therapy. In this study, we detail the conceptual design of each process of the proposed system and describe the methods chosen for the prototype implementation, using face-api.js and tensorflow.js for detection and recognition of facial expressions and the PAD space model for 3D visualization of emotions.
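The PAD-space visualization step can be illustrated with a minimal sketch: project a recognized emotion distribution onto pleasure-arousal-dominance coordinates. The per-emotion anchor coordinates below are illustrative assumptions, not the authors' calibrated values.

```python
# Sketch of projecting an emotion distribution into PAD
# (pleasure-arousal-dominance) space for 3D visualization.
# The anchor coordinates below are illustrative assumptions,
# not calibrated values from the paper.

PAD_ANCHORS = {
    "happy":   ( 0.8,  0.5,  0.4),
    "sad":     (-0.6, -0.4, -0.3),
    "angry":   (-0.5,  0.7,  0.3),
    "fearful": (-0.6,  0.6, -0.4),
    "neutral": ( 0.0,  0.0,  0.0),
}

def to_pad(dist):
    """Probability-weighted average of the anchors: one 3D point
    that can be plotted and tracked over time."""
    point = [0.0, 0.0, 0.0]
    for emotion, p in dist.items():
        anchor = PAD_ANCHORS[emotion]
        for i in range(3):
            point[i] += p * anchor[i]
    return tuple(point)

dist = {"happy": 0.0, "sad": 0.5, "angry": 0.0, "fearful": 0.0, "neutral": 0.5}
print(to_pad(dist))  # halfway toward the "sad" anchor
```

Tracking this point frame by frame is what turns per-frame classification into the kind of longitudinal emotional monitoring the paper targets.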

  • Research Article
  • Cited by 126
  • 10.33969/ais.2020.21005
Emotion Recognition and Detection Methods: A Comprehensive Survey
  • Jan 1, 2020
  • Journal of Artificial Intelligence and Systems
  • Anvita Saxena + 2 more

Human emotion recognition through artificial intelligence is one of the most popular research fields today. Human-Computer Interaction (HCI) and Affective Computing are being used extensively to sense human emotions, which humans generally convey through many indirect and non-verbal means. This survey aims to provide an overview and analysis of all the noteworthy emotion detection methods in a single location; to the best of our knowledge, it is the first attempt to outline all the emotion recognition models developed in the last decade. Drawing on more than one hundred papers, it carries out a detailed analysis of the methodologies along with the datasets. The study revealed that emotion detection is predominantly carried out through four major methods, namely facial expression recognition, physiological signal recognition, speech signal variation, and text semantics, on standard databases such as JAFFE, CK+, the Berlin Emotional Database, SAVEE, etc., as well as self-generated databases. Generally, seven basic emotions are recognized through these methods. Further, we compared the different methods employed for emotion detection in humans. The best results were obtained using the Stationary Wavelet Transform for facial emotion recognition, Particle Swarm Optimization-assisted Biogeography-Based Optimization for emotion recognition from speech, statistical features coupled with different methods for physiological signals, and rough set theory coupled with SVM for text semantics, with respective accuracies of 98.83%, 99.47%, 87.15%, and 87.02%. Overall, Particle Swarm Optimization-assisted Biogeography-Based Optimization, with an accuracy of 99.47% on the BES dataset, gave the best results.

  • Research Article
  • Cited by 12
  • 10.1016/j.iswa.2024.200351
In-depth investigation of speech emotion recognition studies from past to present –The importance of emotion recognition from speech signal for AI–
  • Mar 11, 2024
  • Intelligent Systems with Applications
  • Yeşim Ülgen Sönmez + 1 more


  • Research Article
  • 10.1051/itmconf/20257003012
The Application and Analysis of Emotion Recognition Based on Modern Technology
  • Jan 1, 2025
  • ITM Web of Conferences
  • Lanxin Bi

This article provides a comprehensive analysis of various emotion recognition methods, focusing on speech emotion recognition, facial expression recognition, and physiological signal emotion recognition. The primary aim is to evaluate the advantages and disadvantages of these methods, offering insights into selecting the most appropriate approach for different application scenarios. The study involves collecting and analysing experimental data, exploring their respective strengths and limitations, and proposing potential solutions to enhance their effectiveness. Speech emotion recognition is effective but sensitive to noise and speaker variability, while facial expression recognition excels under controlled conditions but struggles with changes in lighting and angles. Physiological signal recognition offers deep insights into internal emotional states but requires complex signal processing and is vulnerable to external interferences. Despite the growing application of emotion recognition technology across various fields, including healthcare, traffic safety, and security, there remain significant challenges related to accuracy, robustness, and privacy. This study highlights the need for continued research to improve these technologies, particularly in enhancing their robustness and adaptability. The findings provide valuable guidance for researchers and practitioners seeking to optimize emotion recognition systems for diverse real-world applications.

  • Research Article
  • 10.55549/jeseh.813
The Heart and Art of Robotics: From AI to Artificial Emotional Intelligence in STEM Education
  • Mar 26, 2025
  • Journal of Education in Science, Environment and Health
  • Christopher Dignam + 2 more

The evolution of artificial intelligence (AI) and robotics in education has transitioned from automation toward emotionally responsive learning systems through artificial emotional intelligence (AEI). While AI-driven robotics has enhanced instructional automation, AEI introduces an affective dimension by recognizing and responding to human emotions. This study examines the role of AEI-powered robotics in fostering student engagement, cognitive development, and social-emotional learning (SEL) across early childhood, K-12, and higher education. Constructivist and experiential learning theories provide a foundation for integrating emotionally intelligent robotics into interdisciplinary and transdisciplinary STEAM education. Findings indicate that AEI enhances motivation, problem-solving, and collaboration by creating adaptive learning environments that respond to student affective states. However, challenges such as data privacy, inaccuracies in emotion recognition, and access to robotics must be addressed to ensure ethical implementation. The study advocates for further interdisciplinary research, professional growth, and infrastructure investment to optimize AEI-powered robotics in education. It also emphasizes prioritizing emotionally intelligent interactions in AEI-driven robotics, representing a shift toward human-centered AI applications that support personalized learning and holistic student development. Future directions include refining affective computing models and fostering ethical AI and AEI frameworks to ensure responsible and effective implementation from early childhood through higher education settings.

  • Research Article
  • Cited by 37
  • 10.1007/s00521-013-1377-z
Robust emotion recognition in noisy speech via sparse representation
  • Mar 29, 2013
  • Neural Computing and Applications
  • Xiaoming Zhao + 2 more

Emotion recognition in speech signals is currently a very active research topic and has attracted much attention within the engineering application area. This paper presents a new approach to robust emotion recognition in speech signals in noisy environments. Using a weighted sparse representation model based on maximum likelihood estimation, an enhanced sparse representation classifier is proposed for robust emotion recognition in noisy speech. The effectiveness and robustness of the proposed method are investigated on clean and noisy emotional speech. The proposed method is compared with six typical classifiers: the linear discriminant classifier, K-nearest neighbor, the C4.5 decision tree, radial basis function neural networks, support vector machines, and the sparse representation classifier. Experimental results on two publicly available emotional speech databases, the Berlin database and the Polish database, demonstrate the promising performance of the proposed method on the task of robust emotion recognition in noisy speech, outperforming the other methods.
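The decision rule underlying representation-based classifiers like the one above can be illustrated in a much-simplified form: assign a test vector to the class whose training vectors reconstruct it with the smallest residual. This sketch replaces the paper's weighted sparse model with plain least squares over two training vectors per class, and the 3D "feature vectors" are toy data, not speech features.

```python
# Minimal reconstruction-residual classifier: a simplified stand-in for
# the paper's weighted sparse representation classifier. Each class is
# represented by its training vectors; a test vector goes to the class
# whose span reconstructs it with the smallest squared residual.

def lstsq_2col(a1, a2, y):
    """Solve min ||x1*a1 + x2*a2 - y||^2 via the 2x2 normal equations."""
    g11 = sum(u * u for u in a1)
    g22 = sum(u * u for u in a2)
    g12 = sum(u * v for u, v in zip(a1, a2))
    b1 = sum(u * v for u, v in zip(a1, y))
    b2 = sum(u * v for u, v in zip(a2, y))
    det = g11 * g22 - g12 * g12
    x1 = (b1 * g22 - b2 * g12) / det
    x2 = (g11 * b2 - g12 * b1) / det
    return x1, x2

def residual(a1, a2, y):
    x1, x2 = lstsq_2col(a1, a2, y)
    return sum((x1 * u + x2 * v - w) ** 2
               for u, v, w in zip(a1, a2, y))

# Two toy training vectors per emotion class.
classes = {
    "angry":   ([1.0, 0.0, 0.1], [0.9, 0.1, 0.0]),
    "neutral": ([0.0, 1.0, 0.0], [0.1, 0.9, 0.1]),
}
test = [0.95, 0.05, 0.05]
best = min(classes, key=lambda c: residual(*classes[c], test))
print(best)
```

The sparse and weighted variants in the paper refine this same rule: sparsity restricts which training vectors may participate, and the weights down-weight noise-corrupted dimensions.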

  • Conference Article
  • Cited by 2
  • 10.1109/isdfs55398.2022.9800837
The Necessity of Emotion Recognition from Speech Signals for Natural and Effective Human-Robot Interaction in Society 5.0
  • Jun 6, 2022
  • Yeşim Ülgen Sönmez + 1 more

The history of humanity has reached Industry 4.0, which aims to integrate information technologies, and especially artificial intelligence, with all life-sustaining mechanisms in the 21st century; consecutively, the transformation toward Society 5.0 has begun. Society 5.0 means a smart society in which humans share life with physical robots, software robots, and smart devices based on augmented reality. Industry 4.0 comprises structures such as the Internet of Things, big data analytics, digital transformation, cyber-physical systems, artificial intelligence, and business process optimization. It is impossible to consider machines as being without emotions and emotional intelligence within this transformation of smart tools and artificial intelligence; moreover, as most commands are planned to be given by voice and speech, it has become more important to develop algorithms that can detect emotions. In the smart society, new and rapid methods are needed in speech recognition, emotion recognition, and speech emotion recognition to maximize human-computer interaction (HCI), human-robot interaction (HRI), and collaboration. In this study, speech recognition and speech emotion recognition studies in robot technology are investigated and developments are revealed.

  • Book Chapter
  • 10.1007/978-3-319-24033-6_11
Time Dependent ARMA for Automatic Recognition of Fear-Type Emotions in Speech
  • Jan 1, 2015
  • J C Vásquez-Correa + 5 more

Speech signals are non-stationary processes with changes in time and frequency. The structure of a speech signal is also affected by the presence of several paralinguistic phenomena such as emotions, pathologies, and cognitive impairments, among others. Non-stationarity can be modeled using several parametric techniques. A novel approach based on time-dependent auto-regressive moving average (TARMA) models is proposed here to model the non-stationarity of speech signals. The model is tested in the recognition of fear-type emotions in speech. The proposed approach is applied to model syllables and unvoiced segments extracted from recordings of the Berlin and eNTERFACE'05 databases. The results indicate that TARMA models can be used for the automatic recognition of emotions in speech.
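The core idea of time-dependent autoregressive modelling can be shown in a reduced form. This sketch fits a single AR(1) coefficient separately on each short frame of a synthetic signal (not a full TARMA model, and not speech), so the coefficient track reveals the non-stationarity that a fixed ARMA model would miss.

```python
import math

# Much-simplified illustration of time-dependent autoregressive
# modelling: fit an AR(1) coefficient per frame of a synthetic signal
# whose dynamics change halfway through. For a pure cosine cos(w*t),
# the least-squares AR(1) coefficient approaches cos(w), so the track
# jumps when the oscillation frequency changes.

def ar1_coeff(frame):
    """Least-squares AR(1) coefficient for x[t] ~ a * x[t-1]."""
    num = sum(frame[t] * frame[t - 1] for t in range(1, len(frame)))
    den = sum(x * x for x in frame[:-1])
    return num / den

# Synthetic signal: slow oscillation, then a fast one.
signal = [math.cos(0.1 * t) for t in range(100)] + \
         [math.cos(2.5 * t) for t in range(100)]

frame_len = 50
coeffs = [ar1_coeff(signal[i:i + frame_len])
          for i in range(0, len(signal), frame_len)]
print([round(c, 2) for c in coeffs])
```

A genuine TARMA model makes the AR and MA coefficients explicit smooth functions of time instead of piecewise-constant per frame, but the per-frame track already shows why a single stationary model cannot fit both halves.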

  • Conference Article
  • Cited by 14
  • 10.1109/tencon.2015.7372840
Progress in speech emotion recognition
  • Nov 1, 2015
  • Xueying Zhang + 2 more

Emotional information in the speech signal is an important information resource. When verbal expression is combined with human emotion, emotional speech processing is no longer a simple mathematical model or pure calculation. Fluctuations of mood are controlled by the brain's perception, so speech signal processing based on cognitive psychology can capture emotion better. This paper first introduces a relevance analysis between speech emotion and human cognition, then summarizes recent progress in speech emotion recognition, including a review of speech emotion databases, feature extraction, and emotion recognition networks. Next, a fuzzy cognitive map network based on cognitive psychology is introduced into emotional speech recognition, and the mechanism of the human brain for cognitive emotional speech is explored. To improve recognition accuracy, the paper also attempts to integrate event-related potentials into speech emotion recognition. This idea represents the conception and prospects of speech emotion recognition combined with cognitive psychology in the future.

  • Research Article
  • 10.51903/ijgd.v3i1.2811
From Static To Sentient: Designing Emotionally Responsive Interfaces Using Affective Computing For UX Enhancement
  • May 30, 2025
  • International Journal of Graphic Design
  • Dedy Prasetya + 2 more

This study explores the integration of artificial intelligence (AI), particularly generative and affective computing, into user experience (UX) and creative industry workflows. It investigates how recent advancements in multimodal AI, user interface (UI) design, and emotion recognition can enhance personalization, user satisfaction, and design efficiency. Drawing from cross-disciplinary literature, the paper highlights the transformative potential of tools such as DALL·E, Midjourney, and Adobe Firefly in supporting ideation and prototyping, while also addressing concerns about emotional authenticity, ethical transparency, and cultural sensitivity. Findings suggest that AI-driven UX innovations must be grounded in human-centered design to retain user agency and trust, especially in emotionally sensitive contexts. The study emphasizes the role of affective computing in enabling adaptive digital environments through real-time emotion recognition. However, limitations related to the generalizability of findings, lack of empirical testing, and rapid technological evolution are acknowledged. Future research directions include empirical validation of AI-UX frameworks, cross-cultural testing, and interdisciplinary collaboration to ensure ethical, inclusive, and emotionally intelligent design systems. Overall, the study contributes to a growing discourse on the responsible integration of AI in UX, proposing that technology should act as a co-creative partner rather than a replacement for human creativity and empathy.

  • Research Article
  • 10.31893/multirev.2025328
Real-time emotion recognition based on facial expressions using Artificial Intelligence techniques: A review and future directions
  • Apr 5, 2025
  • Multidisciplinary Reviews
  • Cheng Qian + 2 more

In recent years, real-time facial expression recognition based on artificial intelligence has garnered significant attention from academia and industry. This paper presents a systematic literature review and bibliometric analysis of the latest publications in the field, summarizing the development and research significance of facial expression recognition technology and emphasizing its vital role in human-computer interaction and affective computing. The study used PRISMA to review 386 articles published from January 2019 to December 2023 in Web of Science, Scopus, IEEE Xplore, and the ACM Digital Library, covering research methodologies, datasets, and application areas, as well as artificial intelligence techniques, algorithms, and models. The review highlights advancements in facial expression recognition, particularly the predominant use of databases such as FER2013 and CK+, while identifying Convolutional Neural Networks as the primary technique for real-time emotion classification. A quantitative analysis of research trends over the past five years indicates a shift toward keywords like transfer learning and applications in domains such as healthcare and the Internet of Things. Contemporary deep learning models, including CNNs, ResNet, and VGG, demonstrate impressive accuracy in classifying seven basic emotions, facilitating real-time applications across multiple fields. However, challenges such as overfitting, sensitivity to environmental factors, and the need for high-performance computing resources impede broader deployment of these systems. These findings underscore the urgent need for further research to address these limitations and enhance the ethical application of FER technologies. Finally, based on the review and analysis, the paper outlines future research directions, including multimodal information fusion, computational modelling, personalized emotion recognition, and interdisciplinary cooperation, providing valuable references and inspiration for future work.
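The convolution at the heart of the CNN-based FER systems this review surveys can be sketched minimally: one valid 2D convolution followed by ReLU. The 3x3 edge kernel below is hand-picked for illustration; in a trained network such filters are learned, and many layers are stacked.

```python
# Minimal sketch of the core CNN operation behind FER models such as
# those trained on FER2013/CK+: a single 2D valid convolution followed
# by ReLU. The vertical-edge kernel here is a hand-picked illustration,
# not a learned filter.

def conv2d_valid(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0.0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(image[i + u][j + v] * kernel[u][v]
                            for u in range(kh) for v in range(kw))
    return out

def relu(feature_map):
    return [[max(0.0, x) for x in row] for row in feature_map]

# 4x4 "image" with a vertical edge; the kernel responds strongly to it.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

feature_map = relu(conv2d_valid(image, edge_kernel))
print(feature_map)
```

Architectures like VGG and ResNet stack dozens of such convolutions with pooling and skip connections, which is where the accuracy and compute demands discussed above both come from.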

