
Real-time Detection Research Articles

Overview
19501 Articles

Published in last 50 years

Related Topics

  • Detection Applications
  • On-line Detection
  • Reliable Detection
  • Detection Technology
  • Detection Method

Articles published on Real-time Detection

18872 search results, sorted by recency

Eye contact based engagement prediction for efficient human–robot interaction

This paper introduces a new approach to predict human engagement in human–robot interactions (HRI), focusing on eye contact and distance information. Recognising engagement, particularly its decline, is essential for successful and natural interactions. This requires early, real-time user behavior detection. Previous HRI engagement classification approaches use various audiovisual features or adopt end-to-end methods. However, both approaches face challenges: the former risks error accumulation, while the latter suffers from small datasets. The proposed class-sensitive model for capturing engagement in HRI is based on eye contact detection. By analyzing eye contact intensity over time, the model provides a more robust and reliable measure of engagement levels, effectively capturing both temporal dynamics and subtle behavioral changes. Direct eye contact detection, a crucial social signal in human interactions that has not yet been explored as a standalone indicator in HRI, offers a significant advantage in robustness over gaze detection and incorporates additional facial features into the assessment. This approach reduces the number of features from over 100 to just two, enabling real-time processing and surpassing state-of-the-art results with 80.73% accuracy and 80.68% F1-score on the UE-HRI dataset, the primary resource in current engagement detection research. Additionally, cross-dataset testing on a newly recorded dataset with the Tiago robot from Pal Robotics achieved an accuracy of 86.8% and an F1-score of 87.9%. The model employs a sliding window approach and consists of just three fully connected layers for feature fusion and classification, offering a minimalistic yet effective architecture. The study reveals that engagement, traditionally relying on extensive feature sets, can be inferred reliably from temporal eye contact dynamics. 
The results include a detailed analysis of established engagement levels on the UE-HRI dataset using the proposed model. Additionally, models for more nuanced engagement classification are introduced, showcasing the effectiveness of this minimalistic feature set. These models provide a robust foundation for future research, advancing robotic systems and deepening understanding of HRI, for example by improving real-time social cue detection and creating adaptive engagement strategies in HRI.

  • Journal: Complex & Intelligent Systems
  • Publication Date: May 12, 2025
  • Authors: Magnus Jung + 6
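The eye-contact model above aggregates per-frame signals over a sliding window before a small fully connected classifier maps them to engagement levels. A minimal sketch of that windowing step follows; the window size, stride, and exact feature definitions (eye-contact fraction and mean distance) are illustrative assumptions, since the abstract does not give the paper's settings.

```python
# Hedged sketch: windowed aggregation of per-frame eye-contact flags and
# user distance into two features, as the abstract describes. Window size,
# stride, and feature definitions are illustrative assumptions.

def sliding_window_features(eye_contact, distance, win=30, stride=10):
    """eye_contact: list of 0/1 flags per frame; distance: metres per frame."""
    feats = []
    for start in range(0, len(eye_contact) - win + 1, stride):
        ec = eye_contact[start:start + win]
        d = distance[start:start + win]
        intensity = sum(ec) / win      # fraction of frames with eye contact
        mean_dist = sum(d) / win       # average user-robot distance
        feats.append((intensity, mean_dist))
    return feats

frames = [1, 1, 0, 1] * 15             # 60 synthetic per-frame flags
dists = [1.2] * 60                     # constant synthetic distance
windows = sliding_window_features(frames, dists, win=30, stride=10)
```

Each `(intensity, mean_dist)` pair would then be fed to the classifier; the two-feature representation is what enables the real-time processing the abstract reports.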

AgriSphere: An AI-Integrated Agricultural Marketplace Supporting Crop Diagnosis and Rural Economic Empowerment

Abstract - Agriculture remains the backbone of rural livelihoods, particularly in developing countries. Yet, farmers frequently encounter persistent issues like unforeseen pest epidemics, poor access to trustworthy markets, and inadequate timely intervention for decision-making in farming. AgriSphere, a one-stop, AI-based agricultural platform introduced here, aims to overcome such practical issues. The platform consolidates a number of smart features such as real-time crop disease detection using deep learning, predictive crop growth analytics, and a special digital marketplace that facilitates direct-to-consumer and business-to-business transactions. It also provides weather forecasts and an AI chatbot to facilitate well-informed decision-making in local languages. Developed with cloud services, scalable machine learning models, and accessible web technologies, AgriSphere is made to be accessible and flexible. Beyond being a technical solution, it seeks to improve farm productivity, reinforce disease management, and support rural development by bridging the technology gap. By bringing together innovation and on-the-ground requirements, AgriSphere encourages sustainable agriculture, improves food security, and supports the larger cause of digital inclusion in agriculture. Keywords: AI in Agriculture, Crop Disease Diagnosis, Agri-Marketplace, Rural Empowerment, Deep Learning, Smart Farming, Sustainable Agriculture, Food Security, Agricultural Platform, Agri News Dashboard, Firebase Authentication, EfficientNetB0, Smart Crop Recommendation System, Machine Learning, User-Centric Agri Solutions, Agricultural Decision Support System (DSS), Cloud-based Agriculture Monitoring

  • Journal: INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Publication Date: May 12, 2025
  • Author: Mr Ramesh T

Real-time fall detection algorithm based on FFD-AlphaPose and CTR–GCN

  • Journal: Journal of Real-Time Image Processing
  • Publication Date: May 12, 2025
  • Authors: Xuecun Yang + 5

Dark Web Guardian: Real-Time Threat Detection and Analysis

Abstract - The dark web represents a significant security threat due to its anonymity and the prevalence of illegal activities, including cybercrime, data breaches, and the sale of illicit goods. In response, real-time threat detection and analysis have become critical components of cybersecurity strategies. This paper introduces "Dark Web Guardian," a system designed to monitor and identify threats in real-time by analyzing dark web activities. The study focuses on the integration of advanced threat detection techniques, such as machine learning algorithms, behavioural analysis, and automated monitoring systems to track emerging risks. It also discusses the importance of real-time data analysis to prevent potential breaches before they escalate. Furthermore, the paper examines the role of collaboration between cybersecurity professionals, law enforcement, and private sector organizations in strengthening defenses against dark web-based threats. By leveraging innovative detection tools, "Dark Web Guardian" aims to provide proactive and dynamic protection against the evolving dangers lurking on the dark web. Keywords: Darkweb, Illicit activities, Data breaches, Real-time threat detection, Risk prevention, Cybercrime, Automated monitoring, Emerging threats, Dynamic protection.

  • Journal: International Scientific Journal of Engineering and Management
  • Publication Date: May 11, 2025
  • Author: Mrs. M P Nisha

CNN Model for Smart Agriculture

Abstract— Precision farming is being revolutionized by the integration of innovative machine learning and computer vision methods. Identifying and classifying weeds and crops accurately remains a major challenge in this field, which has a direct effect on optimizing yield as well as sustainability. In this work, an approach to smart weed detection based on deep learning, using Convolutional Neural Networks (CNN) for feature learning followed by a comparison of classifiers to select the best-performing model, is introduced. In our research, InceptionV3 was utilized to extract features, and four classifiers—Softmax, Support Vector Machine (SVM), Decision Tree (DT), and Random Forest (RF)—were compared. Among them, the Random Forest classifier performed better than the others, with a validation accuracy of 99.57% and an F1 score of 0.99. Extending the successful application of crop-weed detection, the model was transferred to a new application: forest fire detection. Employing the same CNN-based feature extraction pipeline and Random Forest classification, our system showed high accuracy on a forest fire dataset. In addition, we implemented a real-time detection system using webcam feeds with a processing speed of around 30 frames per second, making practical deployment for environmental monitoring possible. This study not only confirms the efficacy of the union of CNNs and ensemble learning but also exemplifies the versatility of the model architecture in both agricultural and environmental contexts. Index Terms— Convolutional Neural Networks (CNN), InceptionV3, Random Forest, Weed Detection, Crop Classification, Forest Fire Detection, Real-Time Image Processing, Machine Learning, Deep Learning, Precision Agriculture, Environmental Monitoring, Feature Extraction, Webcam Detection.

  • Journal: INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Publication Date: May 11, 2025
  • Author: Prof Shivarudraiah

Implementation of a Novel Gesture Recognition Technique for Real-Time Exercise Motion Detection

Human gesture and motion recognition systems have garnered major attention in recent years owing to their potential applications in fitness tracking, rehabilitation, and sports performance analysis. This research presents the implementation of a novel technique for the real-time detection and recognition of human gestures, specifically focusing on lower-body exercises such as squats. The proposed method leverages deep learning models combined with computer vision and motion capture technologies to accurately distinguish between correct and incorrect exercise forms. The system is trained using a dataset comprising annotated video recordings of various squat exercises, with key body landmarks extracted to track joint movements and detect posture anomalies. The core of the proposed technique involves a machine learning-based classification model that analyses the temporal and spatial features of human movement, providing corrective feedback to users. The model's performance was gauged using standard metrics, namely accuracy, precision, recall, and F1-score, achieving an impressive accuracy rate of 97%, with high precision (95%), recall (96%), and F1-score (95.5%). Moreover, a confusion matrix and classification report are generated to gauge the model's effectiveness in distinguishing between correct and incorrect squat forms. This research adds to human motion detection by offering a robust, accurate, and scalable solution for real-time exercise correction, with potential applications in both fitness and rehabilitation domains.

  • Journal: International Journal of Computational and Experimental Science and Engineering
  • Publication Date: May 11, 2025
  • Authors: Anju Gupta + 2
  • Open Access
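The accuracy, precision, recall, and F1 figures reported above all derive from a binary confusion matrix over correct vs. incorrect squat form. A minimal sketch of that computation; the counts below are synthetic, not the study's data.

```python
# Hedged sketch: the standard binary-classification metrics the abstract
# reports, computed from confusion-matrix counts. Counts are synthetic.

def binary_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted "correct form", how many were
    recall = tp / (tp + fn)             # of actual "correct form", how many found
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

acc, prec, rec, f1 = binary_metrics(tp=96, fp=5, fn=4, tn=95)
```

With these synthetic counts the metrics land near the paper's reported range, illustrating how a ~97% accuracy coexists with slightly lower precision and recall.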

Real-Time Detection and Localization of Force on a Capacitive Elastomeric Sensor Array Using Image Processing and Machine Learning

Soft and flexible capacitive tactile sensors are vital in prosthetics, wearable health monitoring, and soft robotics applications. However, achieving accurate real-time force detection and spatial localization remains a significant challenge, especially in dynamic, non-rigid environments like prosthetic liners. This study presents a real-time force point detection and tracking system using a custom-fabricated soft elastomeric capacitive sensor array in conjunction with image processing and machine learning techniques. The system integrates Otsu’s thresholding, Connected Component Labeling, and a tailored cluster-tracking algorithm for anomaly detection, enabling real-time localization within 1 ms. A 6×6 Dragon Skin-based sensor array was fabricated, embedded with copper yarn electrodes, and evaluated using a UR3e robotic arm and a Schunk force-torque sensor to generate controlled stimuli. The fabricated tactile sensor measures applied forces from 1 to 3 N. Sensor output was captured via a MUCA breakout board and an Arduino Nano 33 IoT, transmitting the Ratio of Mutual Capacitance data for further analysis. A Python-based processing pipeline filters and visualizes the data with real-time clustering and adaptive thresholding. Machine learning models such as linear regression, Support Vector Machine, decision tree, and Gaussian Process Regression were evaluated to correlate force with capacitance values. Decision Tree Regression achieved the highest performance (R2 = 0.9996, RMSE = 0.0446), providing an effective correlation factor of 51.76 for force estimation. The system offers robust performance in complex interactions and a scalable solution for soft robotics and prosthetic force mapping, supporting health monitoring, safe automation, and medical diagnostics.

  • Journal: Sensors
  • Publication Date: May 10, 2025
  • Authors: Peter Werner Egger + 2
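The abstract names Otsu's thresholding followed by Connected Component Labeling to localize force points on the capacitance map. A minimal pure-Python sketch of those two steps on a synthetic 6×6 grid; the real pipeline operates on the sensor's mutual-capacitance ratios, and the bin count and example values here are illustrative assumptions.

```python
# Hedged sketch: Otsu's threshold (maximize between-class variance) plus
# 4-connected component labeling, on a synthetic 6x6 capacitance map.

def otsu_threshold(values, bins=64):
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0
    hist = [0] * bins
    for v in values:
        hist[min(int((v - lo) / width), bins - 1)] += 1
    best_t, best_var = lo, -1.0
    for i in range(1, bins):
        w0 = sum(hist[:i]); w1 = len(values) - w0
        if w0 == 0 or w1 == 0:
            continue
        m0 = sum(hist[j] * (lo + (j + 0.5) * width) for j in range(i)) / w0
        m1 = sum(hist[j] * (lo + (j + 0.5) * width) for j in range(i, bins)) / w1
        var = w0 * w1 * (m0 - m1) ** 2   # between-class variance
        if var > best_var:
            best_var, best_t = var, lo + i * width
    return best_t

def label_components(mask):
    """4-connected component labeling via iterative flood fill."""
    rows, cols = len(mask), len(mask[0])
    labels = [[0] * cols for _ in range(rows)]
    count = 0
    for i in range(rows):
        for j in range(cols):
            if mask[i][j] and not labels[i][j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < rows and 0 <= x < cols and mask[y][x] and not labels[y][x]:
                        labels[y][x] = count
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, count

grid = [[0.0] * 6 for _ in range(6)]
grid[1][1] = grid[1][2] = grid[2][1] = grid[2][2] = 1.0   # one pressed region
grid[4][4] = 0.9                                          # a second, separate touch
t = otsu_threshold([v for row in grid for v in row])
mask = [[v > t for v in row] for row in grid]
labels, n_points = label_components(mask)
```

On this toy map the threshold separates the pressed cells from the background and the labeling finds two distinct force points, matching the detect-then-localize flow the abstract describes.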

Real-Time Detection and Quantification of Rail Surface Cracks Using Surface Acoustic Waves and Piezoelectric Patch Transducers

This paper presents a novel wayside rail monitoring system for real-time detection and quantification of rail surface cracks with sub-millimeter precision. The core innovation lies in mounting piezoelectric transducers on the web of the rail—an unconventional and practical location that avoids interference with wheel passages while enabling continuous monitoring in real-world conditions. Moreover, to directly quantify crack depth, a customized signal processing pipeline is developed, employing surface acoustic waves (SAWs) and incorporating a parallel reference transducer pair mounted on an undamaged rail section for calibration. This auxiliary pair provides a real-time calibration baseline, improving measurement robustness and accuracy. The method is experimentally validated on rail samples and verified through metallographic analysis. This approach enables condition-based maintenance by improving detection accuracy and offers the potential to reduce operational costs and enhance railway safety.

  • Journal: Sensors
  • Publication Date: May 10, 2025
  • Authors: Mohsen Rezaei + 8

Real-Time Detection and Instance Segmentation Models for the Growth Stages of Pleurotus pulmonarius for Environmental Control in Mushroom Houses

  • Journal: Agriculture
  • Publication Date: May 10, 2025
  • Authors: Can Wang + 5
  • Open Access

Deep Fake Detection

Abstract - A method called Deep Fake Identification with Machine Learning uses deep learning approaches to enhance the identification of AI-manipulated media and verify the authenticity of smart media. Artificial intelligence (AI) produces incredibly lifelike synthetic videos known as "deep fakes," which can cause political instability, disinformation, and harm to one's reputation. This project uses preprocessing methods like face cropping and frame extraction to analyse video material. While LSTM is used for temporal sequence modelling to categorise videos as real or deepfake, ResNeXt CNN is employed for feature extraction. Real-time detection and increased accuracy are the outcomes of the system's automation of video forensics. It guarantees dependable results and offers users an easy-to-use online interface by utilising deep learning. Key words: LSTM, deepfake detection, facial recognition, video forensics, computer vision, deep learning, ResNeXt, media authenticity.

  • Journal: INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Publication Date: May 10, 2025
  • Author: Mr Sanjay M

Mental Stress Detection Using Wearable Sensors and Machine Learning

Abstract - This project proposes a real-time stress level detection system using a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) layers. It collects biometric data—heart rate (BPM) and SpO₂ levels—from a pulse sensor connected to an ESP8266 microcontroller. The data is transmitted via Wi-Fi to a Python-based backend for processing. The collected signals are normalized using MinMaxScaler and reshaped to preserve their sequential nature. The preprocessed data is fed into a trained RNN model that classifies stress levels into four categories: No Stress, Medium, High, and Very High. The model uses a softmax output layer and categorical cross-entropy loss for accurate multi-class classification. Predictions are generated in real time and displayed for immediate feedback. The model was previously trained on a synthetically generated dataset reflecting stress-related biometric thresholds. The system is scalable and can be integrated into mobile applications using TensorFlow Lite, offering continuous and portable stress monitoring. This approach enables early detection and intervention for stress-related health issues. This innovative system bridges wearable technology with deep learning for efficient mental health monitoring. Its real-time feedback mechanism empowers users to take timely actions to manage stress effectively. Key Words: Stress Detection, RNN, LSTM, Heart Rate, SpO₂, ESP8266, Real-Time Monitoring, Deep Learning

  • Journal: INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Publication Date: May 10, 2025
  • Author: Benitlin Subha K
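The preprocessing the abstract describes (min-max normalization, then reshaping into sequences for the LSTM) can be sketched in a few lines. The window length and the sample BPM/SpO₂ values below are illustrative assumptions, not the project's actual configuration.

```python
# Hedged sketch: per-channel min-max scaling of biometric readings, then
# grouping into fixed-length sequences for a recurrent model.

def minmax_scale(column):
    lo, hi = min(column), max(column)
    return [(v - lo) / (hi - lo) for v in column]

def to_sequences(rows, timesteps=5):
    """Group consecutive (bpm, spo2) rows into (timesteps, features) chunks."""
    n = len(rows) // timesteps
    return [rows[i * timesteps:(i + 1) * timesteps] for i in range(n)]

bpm = [60 + 3 * i for i in range(20)]        # synthetic heart rate
spo2 = [99 - 0.35 * i for i in range(20)]    # synthetic SpO2 (%)
rows = list(zip(minmax_scale(bpm), minmax_scale(spo2)))
seqs = to_sequences(rows, timesteps=5)       # 4 sequences of 5 timesteps
```

Each chunk in `seqs` corresponds to one input sequence for the LSTM; scaling both channels to [0, 1] mirrors what `MinMaxScaler` does per feature.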

Real-Time Defect Detection and Carbon Footprint Visualization in Green Construction Using Mobile Augmented Reality and Building Information Modeling

The integration of mobile augmented reality (AR) technology with building information modeling (BIM) has introduced novel solutions for construction management, particularly in real-time defect detection and carbon footprint monitoring. AR technology enables the real-time provision of three-dimensional visual information at construction sites, which, when combined with BIM, facilitates accurate defect identification and feedback. Additionally, BIM provides a scientific basis for planning carbon emission pathways in construction projects. However, existing defect detection and carbon footprint management systems face challenges such as limited accuracy and insufficient real-time capabilities. Current research on green construction primarily focuses on defect detection and carbon footprint calculations, yet most approaches continue to rely on traditional two-dimensional drawings or manual inspection, which fail to meet the real-time demands of construction sites. The absence of an integrated solution leveraging both AR and BIM technologies has constrained their practical application in construction. To address these limitations, this study proposes a real-time defect detection and carbon footprint visualization and path planning system for green construction, based on mobile AR technology and BIM. The system employs AR-based stereo matching for real-time defect identification and utilizes BIM for carbon footprint visualization path planning. This study aims to provide an efficient and accurate approach to defect detection while enhancing the environmental protection level during the construction process through effective carbon footprint management.

  • Journal: International Journal of Interactive Mobile Technologies (iJIM)
  • Publication Date: May 9, 2025
  • Authors: Yi Liao + 1
  • Open Access

Advancing Meibography Assessment and Automated Meibomian Gland Detection Using Gray Value Profiles

Objective: This study introduces a novel method for the automated detection and quantification of meibomian gland morphology using gray value distribution profiles. The approach addresses limitations in traditional manual and deep learning-based meibography analysis, which are often time-consuming and prone to variability. Methods: This study enrolled 100 volunteers (mean age 40 ± 16 years, range 18–85) who suffered from dry eye; participants responded to the Ocular Surface Disease Index questionnaire for scoring ocular discomfort symptoms and underwent infrared meibography for imaging of the meibomian glands. By leveraging pixel brightness variations, the algorithm provides real-time detection and classification of long, medium, and short meibomian glands, offering a quantitative assessment of gland atrophy. Results: A novel parameter, namely the “atrophy index”, a quantitative measure of gland degeneration, is introduced. The atrophy index is the first instrumental measurement to assess single- and multiple-gland morphology. Conclusions: This tool provides a robust, scalable metric for integrating quantitative meibography into clinical practice, making it suitable for real-time screening and advancing the management of dry eye owing to meibomian gland dysfunction.

  • Journal: Diagnostics
  • Publication Date: May 9, 2025
  • Authors: Riccardo Forni + 8
  • Open Access

Enhancing Autonomous Drone Navigation: YOLOv5-Based Object Detection and Collision Avoidances

Abstract In recent years, drone technology has seen profound advancements, especially with regard to safe and autonomous operation, which heavily relies on object detection and avoidance capabilities. These autonomously functioning drones can operate in challenging environments for tasks like search and rescue operations, as well as industrial monitoring. The present research focuses on enhancing object detection for autonomous drones by utilizing publicly available image datasets instead of custom images. Datasets like VisDrone, DroneDeploy, and DOTA contain a plethora of stunning, real-life images that make them ideal candidates for improving the accuracy and robustness of object detection models. We propose an optimized method for training the YOLOv5 model to enhance object detection. The collected dataset is evaluated on precision, recall, F1-score, and mAP through both CNN and YOLO models. The findings show that using the YOLOv5 deep architecture to implement real-time object detection and avoidance in UAVs is more efficient than traditional CNN approaches. Keywords: Autonomous Drone, CNN, Yolo V5, Sensor Integration, Deep Learning.

  • Journal: INTERNATIONAL JOURNAL OF SCIENTIFIC RESEARCH IN ENGINEERING AND MANAGEMENT
  • Publication Date: May 9, 2025
  • Author: Kalrav Gediya
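The mAP, precision, and recall figures used to compare the YOLOv5 and CNN models above all rest on matching predicted boxes to ground truth via intersection-over-union (IoU). A minimal sketch of that core computation, with synthetic boxes:

```python
# Hedged sketch: IoU between two axis-aligned boxes (x1, y1, x2, y2),
# the matching criterion underlying detection metrics such as mAP@0.5.

def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

score = iou((0, 0, 10, 10), (5, 5, 15, 15))   # overlap 25, union 175
```

A prediction counts as a true positive at mAP@0.5 only when its IoU with a ground-truth box of the same class is at least 0.5; this pair falls well short.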

Adaptability Study of an Unmanned Aerial Vehicle Actuator Fault Detection Model for Different Task Scenarios

Unmanned aerial vehicles (UAVs) may encounter actuator faults in diverse flight scenarios, requiring robust fault detection models that can adapt to varying data distributions. To address this challenge, this paper proposes an approach that integrates Domain-Adversarial Neural Networks (DANNs) with a Mixture of Experts (MoE) framework. By employing domain-adversarial learning, the method extracts domain-invariant features, mitigating distribution discrepancies between source and target domains. The MoE architecture dynamically selects specialized expert models based on task-specific data characteristics, improving adaptability to multimodal environments. This integration enhances fault detection accuracy and robustness while maintaining efficiency under constrained computational resources. To validate the proposed model, we conducted flight experiments, demonstrating its superior performance in actuator fault detection compared to conventional deep learning methods. The results highlight the potential of MoE-enhanced domain adaptation for real-time UAV fault detection in dynamic and uncertain environments.

  • Journal: Drones
  • Publication Date: May 9, 2025
  • Authors: Lulu Wang + 5

A Dynamic Evaluation Method for Collaborative Search Efficiency of Multi-Sonar Systems Under Uncertain Situations

In sonar collaborative search tasks, effectively evaluating the collaborative search efficiency is an important way to measure whether a task can be successful, and it can also provide strong support for optimizing search schemes. In complex marine environments, sonar collaborative search faces challenges such as uncertain task scenes and real-time changing situations. Traditional evaluation methods cannot meet the evaluation requirements of these tasks since they do not analyze the involved dynamic modeling process. To bridge this gap, in this paper we propose a novel evaluation method for sonar collaborative search efficiency based on adaptive information fusion and dynamic deduction. Specifically, we first develop an information fusion method for multi-sensor detection based on adaptive weight calculation: weights are assigned to each sensor based on the real-time changing detection probability to obtain more accurate detection probability fusion results. Then, we introduce the Monte Carlo sampling concept to establish an efficiency evaluation model based on the information fusion results. It discretizes the sonar search path and target motion trajectory in time and space, and calculates the sonar detection efficiency point by point, which can overcome the challenge of uncertain situation conditions due to the uncertainty of target motion by dynamic spatial-temporal deduction. Compared with the average weighted fusion method, the variance of the proposed adaptive fusion method decreases from 0.01 to 0.0071, which proves its better stability. The results of a one-sample t-test indicate that at the level of α=0.05, there is a significant difference between the average detection probability and the random probability of 0.5, indicating statistical significance. 
Moreover, we verify the effectiveness of the proposed method in fully-passive and multi-base working modes, and compare the impact of each sonar on the overall detection capability of the multi-sonar system, which also demonstrates the advantages and reliability of the new model.

  • Journal: Applied Sciences
  • Publication Date: May 9, 2025
  • Authors: Shizhe Wang + 3
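The adaptive-weight idea described above, where each sonar's weight tracks its instantaneous detection probability so that stronger detectors dominate the fused estimate, can be illustrated with a toy rule. The proportional weighting below is an illustrative assumption for the sketch, not the paper's actual formula, and the probabilities are synthetic.

```python
# Hedged sketch of adaptive weighted fusion: weights proportional to each
# sensor's current detection probability (illustrative rule, not the
# paper's exact method).

def adaptive_fusion(probs):
    total = sum(probs)
    if total == 0:
        return 0.0
    weights = [p / total for p in probs]          # adaptive, per-instant weights
    return sum(w * p for w, p in zip(weights, probs))

fused = adaptive_fusion([0.9, 0.6, 0.3])          # stronger sonars dominate
```

Compared with a plain average (0.6 here), the self-weighted estimate is pulled toward the better-performing sensors, which is the qualitative behaviour the paper attributes to its adaptive scheme.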

A convolutional state-space framework for wind turbine fault diagnosis using hierarchical feature extraction and dynamic state modeling on SCADA system

ABSTRACT Wind turbines are the backbone of renewable energy production but are often subject to mechanical failure that reduces efficiency and increases maintenance costs as well as downtime. Traditional non-destructive testing and fault diagnosis approaches such as vibration analysis are inflexible and non-adaptive; deep learning-based methods, on the other hand, lack interpretability. To tackle these issues, we present a fault diagnosis framework that combines a 1D-CNN and a dynamic state-space model (DSSM) for 1D vibration signals from faulty turbine states. This novel modelling approach captures both local and temporal dependencies, which helps improve interpretability and enables real-time fault detection. The efficacy of our approach is validated on a series of wind turbine fault conditions, showing competitive performance compared to classical machine learning models and deep learning models by achieving an average of 97% accuracy across all conditions, with high fault classification accuracy at different operational conditions as well.

  • Journal: Nondestructive Testing and Evaluation
  • Publication Date: May 9, 2025
  • Authors: Muhammad Irfan + 5

Smartphone-based text obtained via passive sensing as it relates to direct suicide risk assessment.

Recent research highlights the dynamics of suicide risk, resulting in a shift toward real-time methodologies, such as ecological momentary assessment (EMA), to improve suicide risk identification. However, EMA's reliance on active self-reporting introduces challenges, including participant burden and reduced response rates during crises. This study explores the potential of Screenomics, a passive digital phenotyping method that captures intensive, real-time smartphone screenshots, to detect suicide risk through text-based analysis. Seventy-nine participants with past-month suicidal ideation or behavior completed daily EMA prompts and provided smartphone data over 28 days, resulting in approximately 7.5 million screenshots. Text from screenshots was analyzed using a validated dictionary encompassing suicide-related and general risk language. Results indicated significant associations between passive and active suicidal ideation and suicide planning with specific language patterns. Detection of words related to suicidal thoughts and general risk-related words strongly correlated with self-reported suicide risk, with distinct between- and within-person effects highlighting the dynamic nature of suicide risk factors. This study demonstrates the feasibility of leveraging smartphone text data for real-time suicide risk detection, offering a scalable, low-burden alternative to traditional methods. Findings suggest that dynamic, individualized monitoring via passive data collection could enhance suicide prevention efforts by enabling timely, tailored interventions. Future research should refine language models and explore diverse populations to extend the generalizability of this innovative approach.

  • Journal: Psychological medicine
  • Publication Date: May 9, 2025
  • Authors: Brooke A Ammerman + 8
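The dictionary-based text analysis described above amounts to counting hits from a validated word list against each screenshot's extracted text. A minimal sketch; the three-word list here is a tiny illustrative stand-in, not the study's validated dictionary.

```python
import re

# Hedged sketch: dictionary-based scoring of screenshot text. RISK_TERMS
# is an illustrative stand-in for the study's validated word list.
RISK_TERMS = {"hopeless", "burden", "goodbye"}

def risk_score(text):
    """Return dictionary hits per word for one chunk of extracted text."""
    words = re.findall(r"[a-z']+", text.lower())
    hits = sum(1 for w in words if w in RISK_TERMS)
    return hits / max(len(words), 1)

score = risk_score("I feel hopeless and like a burden today")
```

Normalizing hits by word count gives a rate that can be aggregated per day or per person, which is what enables the between- and within-person analyses the abstract mentions.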

Forest Fire Detection Algorithm Based on Improved YOLOv11n

To address issues in traditional forest fire detection models, such as large parameter sizes, slow detection speed, and unsuitability for resource-constrained devices, this paper proposes a forest fire detection method, FEDS-YOLOv11n, based on an improved YOLOv11n model. First, the C3k2 module was redesigned using the FasterBlock module, replacing C3k2 with C3k2-Faster in both the Backbone network and Neck section to achieve a lightweight model design. Second, an EMA attention mechanism was introduced into the C3k2-Faster module in the Backbone, replacing C3k2-Faster with C3k2-Faster-EMA to compensate for the accuracy loss in small-object detection caused by the lightweight design. Third, the original upsampling module in the Neck was replaced with the lightweight dynamic upsampling operator DySample. Finally, the detection head was improved using the SEAM attention module, replacing the original Detect head with SEAMHead, which enables better handling of occluded objects. The experimental results show that compared to YOLOv11n, the proposed FEDS-YOLOv11n achieves improvements of 0.9% in precision (P), 1.9% in recall (R), 2.1% in mean average precision at IoU 0.5 (mAP@0.5), and 2.3% in mean average precision at IoU 0.5–0.95 (mAP@0.5–0.95). Additionally, the number of parameters is reduced by 21.32%, GFLOPs are reduced by 26.98%, and FPS increases from 48.2 to 71.8. The FEDS-YOLOv11n model ensures high accuracy while maintaining lower computational complexity and faster inference speed, making it suitable for real-time forest fire detection applications.

  • Journal: Sensors
  • Publication date: May 9, 2025
  • Authors: Kangqian Zhou + 1

A Wearable Wisdom: A Bi-Modal Behavioral Biometric Scheme for Smartwatch User Authentication

This work utilizes wearable devices for real-time stress detection and investigates the effectiveness of meditation audio in reducing stress levels after academic exposure. Physiological data, including Interbeat Interval (IBI)-derived Heart Rate Variability (HRV), Blood Volume Pulse (BVP), and electrodermal activity (EDA), are collected during the Montreal Imaging Stress Task (MIST). The stress classification methodology employs an integrated approach using a Genetic Algorithm and Mutual Information to reduce feature set redundancy. It further uses Bayesian optimization to fine-tune machine learning hyperparameters. The results indicate that the combination of EDA, BVP, and HRV achieves the highest classification accuracies of 98.28% and 97.02% using the Gradient Boosting (GB) algorithm for 2-level and 3-level stress classification, respectively. In contrast, EDA and HRV alone achieve comparable accuracies of 97.07% and 95.23% for 2-level and 3-level stress classification, respectively. Furthermore, the SHAP Explainable AI (XAI) analysis confirms that HRV and EDA are the most significant features for stress classification. The study also finds evidence that listening to meditation audio reduces stress levels. These findings highlight the potential of wearable technology combined with machine learning for real-time stress monitoring and management in academic environments.
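The Mutual Information step used to reduce feature-set redundancy can be illustrated with a minimal from-scratch scorer for discretized features. The data below is a toy example; the study's actual pipeline combines this with a Genetic Algorithm and Bayesian-optimized classifiers:

```python
from collections import Counter
from math import log2

def mutual_information(xs, ys):
    """I(X;Y) in bits for two equal-length discrete sequences.
    Higher values mean the feature carries more information
    about the stress label."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * log2(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy binned feature values vs. binary stress labels (hypothetical).
feature = [0, 0, 1, 1, 0, 1, 1, 0]
stress  = [0, 0, 1, 1, 0, 1, 0, 1]
score = mutual_information(feature, stress)
```

Ranking candidate features by such scores (and dropping features that are mutually redundant) shrinks the input to the Gradient Boosting classifier without discarding the signal that HRV and EDA carry.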

  • Journal: International Journal of Scientific Research in Science, Engineering and Technology
  • Publication date: May 9, 2025
  • Authors: Bandaru Chennakesava Naidu + 4
