Related Topics

  • Level Of Expertise
  • Art Experts
  • Expert Practitioners
  • Expert Process
  • Trained Experts
  • Engineering Expertise

Articles published on Visual expertise

345 search results, sorted by recency

  • Research Article
  • 10.17586/2226-1494-2025-25-6-1107-1116
A multimodal approach for depression detection using semi-automatic data annotation and deterministic machine learning methods
  • Dec 23, 2025
  • Scientific and Technical Journal of Information Technologies, Mechanics and Optics
  • A N Velichko + 1 more

This work studies the trending task of automatic detection of a person's psycho-emotional state. Scientific interest in automatic multimodal depression detection stems from the prevalence of anxiety-depressive disorders and the difficulty of detecting them in primary health care. The task is complicated by its inherent complexity, data scarcity, class imbalance, and annotation inaccuracies. Comparative studies show that classification results on semi-automatically annotated data are higher than those on automatically annotated data. The proposed approach to depression detection combines semi-automatic data annotation with deterministic machine learning methods and several feature sets. To build our models, we used the multimodal Extended Distress Analysis Interview Corpus (E-DAIC), which consists of audio recordings, texts automatically extracted from those recordings, video feature sets extracted from video recordings, and annotations including the Patient Health Questionnaire (PHQ-8) score for each recording. Semi-automatic annotation makes it possible to obtain exact time stamps and speech transcripts, reducing noise in the training data. The approach uses several feature sets extracted from each modality: the acoustic expert feature set eGeMAPS, the neural acoustic feature set DenseNet, the visual expert feature set OpenFace, and the text feature set Word2Vec. Combined processing of these features minimizes the effect of class imbalance on classification results. Experiments using mostly expert features (DenseNet, OpenFace, Word2Vec) and deterministic, interpretable machine learning classifiers (CatBoost) yielded results on the E-DAIC corpus comparable with the state of the art (68.0% Weighted F1-measure (WF1) and 64.3% Unweighted Average Recall (UAR)). Semi-automatic annotation and modality fusion improved both annotation quality and depression detection compared with unimodal approaches, and produced more balanced classification results. Because the decision-tree-based classifiers are interpretable, a future interpretability analysis of the classification results is possible; methods such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME) can also be used for this purpose.
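
At the classification step, the approach summarized above reduces to concatenating per-modality feature vectors and training a gradient-boosted tree classifier scored with WF1 and UAR. A minimal sketch with synthetic data; the OpenFace and Word2Vec dimensions are assumptions (only the 88-dimensional eGeMAPS size is standard), and the binary labels are random placeholders for a PHQ-8 cutoff:

```python
# Sketch of multimodal early fusion + CatBoost, scored with WF1 and UAR.
# All data below is synthetic; feature dimensions other than eGeMAPS are assumed.
import numpy as np
from catboost import CatBoostClassifier
from sklearn.metrics import f1_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 200
egemaps = rng.normal(size=(n, 88))    # acoustic expert features (eGeMAPS is 88-d)
openface = rng.normal(size=(n, 49))   # visual expert features (dimension assumed)
word2vec = rng.normal(size=(n, 300))  # averaged text embeddings (dimension assumed)
y = rng.integers(0, 2, size=n)        # 0 = non-depressed, 1 = depressed (placeholder)

# Early fusion: concatenate the per-modality feature sets.
X = np.hstack([egemaps, openface, word2vec])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

model = CatBoostClassifier(iterations=300, depth=4, verbose=False, random_seed=0)
model.fit(X_tr, y_tr)
pred = model.predict(X_te)

wf1 = f1_score(y_te, pred, average="weighted")   # Weighted F1 (WF1)
uar = recall_score(y_te, pred, average="macro")  # Unweighted Average Recall (UAR)
print(f"WF1 = {wf1:.3f}, UAR = {uar:.3f}")
```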

  • Research Article
  • 10.1098/rspb.2025.2005
Super-recognizers sample visual information of superior computational value for facial recognition
  • Nov 5, 2025
  • Proceedings of the Royal Society B: Biological Sciences
  • James D Dunn + 5 more

Super-recognizers—individuals with exceptionally high face recognition abilities—are a key exemplar of biological visual expertise. Recent eye-tracking evidence suggests that their expertise may be driven by exploratory viewing behaviour during learning, but it remains unclear whether this perceptual sampling is functional for face identity processing. Here, we develop a novel approach to quantify the computational value of face information samples and test the utility of information sampling in super-recognizers. Using measurements of eye gaze behaviour, we reconstructed the retinal information that participants acquired while learning new faces. We then evaluated the computational value of this information for face identity processing using nine deep neural networks (DNNs) optimized for this task. Identity matching accuracy improved across all DNNs when using visual information sampled by super-recognizers compared with typical viewers. Interestingly, this advantage could not be explained by the greater quantity of information alone, and so differences in both the quantity and quality of face information encoded on the retina contribute to individual differences in face processing ability. These findings support accounts of visual expertise that emphasize attentional mechanisms and the role of active visual exploration in learning.
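
The evaluation idea, scoring the face information a viewer sampled by how well a face-recognition network can match identities from it, can be sketched with embedding similarity. A minimal sketch with random placeholder embeddings; the dimension and decision threshold are assumptions, and the study itself used nine task-optimized DNNs rather than this toy matcher:

```python
# Sketch of identity matching by cosine similarity between face embeddings.
# Real embeddings would come from a face-recognition DNN applied to
# gaze-reconstructed retinal input; these are random placeholders.
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(1)
dim = 512                                        # embedding size (assumed)
learned = rng.normal(size=dim)                   # embedding of the sampled face info
same_id = learned + 0.3 * rng.normal(size=dim)   # probe of the same identity
diff_id = rng.normal(size=dim)                   # probe of a different identity

threshold = 0.5  # assumed decision threshold
for name, probe in [("same identity", same_id), ("different identity", diff_id)]:
    sim = cosine(learned, probe)
    print(f"{name}: similarity={sim:.2f} -> {'match' if sim > threshold else 'non-match'}")
```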

  • Research Article
  • 10.1177/03010066251378983
Visual expertise for aerial- and ground-views of houses: No evidence for mental rotation, but experts were more diligent than novices.
  • Oct 7, 2025
  • Perception
  • Emil Skog + 2 more

Ordnance Survey (OS) remote sensing surveyors have extensive experience with aerial views of scenes and objects. Building on our previous work with this group, we investigated whether their expertise influenced performance on a same/different object recognition task involving houses. In an online study, these stimuli were shown both from familiar ground-level viewpoints and from what are, for most people, unfamiliar aerial viewpoints. OS experts and novices compared achromatic, disparity-free images with aerial perspectives rotated around the clock against canonical ground-views; we measured response times (RTs) and sensitivities (d'). In two 'grounding' tasks using rotated letters, we found conventional outcomes for both groups, validating the online approach. Experiment 1 (non-matching letters) yielded ceiling-level performance with no signs of mental rotation, consistent with a feature-based recognition strategy. In Experiment 2 (mirror-reversed letters), both groups showed orientation-dependent performance, but experts exhibited a speed-accuracy trade-off, responding more cautiously than novices. In the main house task (Experiment 3), we found (a) the same speed-accuracy trade-off observed in Experiment 2, (b) substantially longer RTs overall, and (c) no evidence for mental rotation in either group, mirroring Experiment 1. Contrary to our earlier findings on aerial depth perception, expertise in remote sensing did not yield a distinctive recognition strategy for the experiments here. However, experts displayed more diligent tactics in Experiments 2 and 3. We suggest that all participants in Experiment 3 engaged in cognitively challenging feature comparisons across viewpoints, presumably supported by volumetric or surface-connected prototypes of houses as the basis for feature comparisons.
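
The sensitivity measure d' reported above is computed from hit and false-alarm rates as d' = z(HR) - z(FAR). A minimal sketch with invented trial counts, using a log-linear correction so that extreme rates stay finite:

```python
# Sketch of d' (sensitivity) for a same/different task.
# Counts are invented; the log-linear correction adds 0.5 to each cell
# so hit/false-alarm rates of exactly 0 or 1 do not yield infinite z-scores.
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    hr = (hits + 0.5) / (hits + misses + 1)   # corrected hit rate
    far = (fas + 0.5) / (fas + crs + 1)       # corrected false-alarm rate
    return norm.ppf(hr) - norm.ppf(far)       # z(HR) - z(FAR)

print(f"d' = {d_prime(hits=42, misses=8, fas=12, crs=38):.2f}")
```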

  • Research Article
  • 10.5194/isprs-archives-xlviii-m-9-2025-277-2025
Photogrammetric Localisation of Electromagnetic Sensors for Detecting Anomalies in Heritage Internal Wood Structures
  • Oct 1, 2025
  • The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences
  • Maria Chizhova + 3 more

The strength estimation of built-in historical timber requires extensive investigation of internal wooden structures and properties, which cannot be mapped using only optical surveying methods. In most cases, additional equipment is required, including invasive techniques such as drilling resistance technology and sensitive methods such as ultrasonics or electromagnetic (EM) sensor technology. To date, however, the localisation of a measuring device within a whole timber structure, and thus the uniform registration of internal and external wooden structures, has been missing. In this article, we investigate the feasibility of localising the EM sensor, which will be used in future research activities for in-depth analysis of internal timber structures. The position of the EM sensor will be captured during 3D timber imaging using different optical methods to provide a reference between the external wood surface and digitised internal structures. The data quality is compared and evaluated according to the measurement method, using geometric features and visual expertise.

  • Research Article
  • 10.1007/s11548-025-03508-9
Visualization support for remote collaborative aneurysm treatment planning.
  • Sep 8, 2025
  • International journal of computer assisted radiology and surgery
  • Rebecca Preßler + 3 more

Cerebral aneurysms are blood-filled bulges that form at weak points in blood vessel walls, and their rupture can lead to life-threatening consequences. Given the high risk associated with these aneurysms, thorough examination and analysis are essential for determining appropriate treatment. While existing tools such as ANEULYSIS and its web-based counterpart WEBANEULYSIS provide interactive means for analyzing simulated aneurysm data, they lack support for collaborative analysis, which is crucial for enhancing interpretation and improving treatment decisions in medical team meetings. To address this limitation, we introduce WEBCOANEULYSIS, a novel collaborative tool for aneurysm data analysis. WEBCOANEULYSIS builds upon the established visualization techniques of WEBANEULYSIS while incorporating innovative collaborative features to facilitate joint analysis and discussion among medical professionals. The tool was evaluated by three physicians and two visualization experts, who assessed its usability, functionality, and effectiveness in supporting collaborative decision-making. The evaluation results were overwhelmingly positive. The physicians particularly appreciated the tool's ability to provide a clear overview of aneurysm data while maintaining ease of use despite its complex functionality. Although minor suggestions for improvement were made, the overall feedback highlighted the benefits of WEBCOANEULYSIS in improving collaborative analysis and treatment planning. WEBCOANEULYSIS enhances aneurysm data analysis by enabling real-time collaboration among medical professionals, thereby supporting more informed treatment decisions. Beyond its primary application in risk analysis and treatment planning, the tool also has potential benefits for patient education and the training of new doctors, making it a valuable addition to the field of medical visualization and decision support systems.

  • Research Article
  • 10.1109/tvcg.2024.3431930
RSVP for VPSA: A Meta Design Study on Rapid Suggestive Visualization Prototyping for Visual Parameter Space Analysis.
  • Sep 1, 2025
  • IEEE transactions on visualization and computer graphics
  • Manfred Klaffenboeck + 4 more

Visual Parameter Space Analysis (VPSA) enables domain scientists to explore input-output relationships of computational models. Existing VPSA applications often feature multi-view visualizations designed by visualization experts for a specific scenario, making it hard for domain scientists to adapt them to their problems without professional help. We present RSVP, the Rapid Suggestive Visualization Prototyping system encoding VPSA knowledge to enable domain scientists to prototype custom visualization dashboards tailored to their specific needs. The system implements a task-oriented, multi-view visualization recommendation strategy over a visualization design space optimized for VPSA to guide users in meeting their analytical demands. We derived the VPSA knowledge implemented in the system by conducting an extensive meta design study over the body of work on VPSA. We show how this process can be used to perform a data and task abstraction, extract a common visualization design space, and derive a task-oriented VisRec strategy. User studies indicate that the system is user-friendly and can uncover novel insights.

  • Research Article
  • Cited: 1
  • 10.33492/jrs-d-25-3-2702152
Road Safety Awareness Using Internet Meme Posts: The Role of Visual Design in Effective Communication
  • Aug 20, 2025
  • Journal of Road Safety
  • Bhaskar Mishra + 1 more

Road safety, a major cause of death and injury worldwide, is a public health and social development issue. Since 2023, Uttarakhand police have used an “Internet Meme Strategy” on social media to promote road safety. The study aims to evaluate the Uttarakhand police department’s “Safety on the Roads” campaign via its Facebook page, which uses internet meme posts and non-meme posts to raise awareness about road safety issues, including speeding, drink driving, and red light violations. The objectives were to establish how the Uttarakhand police department uses internet memes about road safety on its Facebook page to promote the “Safety on the Roads” campaign, how followers perceive memes visually and verbally, and how the insights from meme posts differ from non-meme posts. The study used a quantitative, descriptive-comparative design: a survey assessing awareness, behaviour change, responsibility, and visual and verbal features (n=384, aged 18-29 years; 59.6% male, 40.4% female). In addition, all internet posts were evaluated for visual and verbal characteristics by university-educated visual experts (n=5) affiliated with the faculties of arts and cultural studies at Indian universities. From January to December 2023, 25 road safety posts (16 internet memes and 9 non-memes) were analysed. The independent variable was the type of content (internet meme posts vs. non-meme posts), while the dependent variables included perception (visual and verbal characteristics), exposure, and engagement. A series of independent-samples t-tests found that internet meme posts had significantly higher reach, impressions, and engagement compared to non-meme posts (p<.05). Internet meme posts, particularly posts that use humour and are widely shared, can be powerful communication tools because they are concise and relatable, and may contribute to reducing both reported and unreported crashes.
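
The core comparison above is an independent-samples t-test on engagement metrics for meme versus non-meme posts. A minimal sketch with invented engagement figures for the 16 meme and 9 non-meme posts; Welch's variant is used here as a safe default with unequal group sizes:

```python
# Sketch of the independent-samples t-test comparing meme vs. non-meme posts.
# Engagement values are invented; only the group sizes (16 vs. 9) match the paper.
from scipy.stats import ttest_ind

meme_engagement = [540, 610, 480, 720, 505, 650, 590, 700,
                   460, 530, 615, 580, 495, 640, 555, 605]
nonmeme_engagement = [210, 180, 250, 190, 230, 205, 175, 220, 195]

# equal_var=False gives Welch's t-test, robust to unequal variances/group sizes.
t, p = ttest_ind(meme_engagement, nonmeme_engagement, equal_var=False)
print(f"t = {t:.2f}, p = {p:.4f}")
```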

  • Research Article
  • Cited: 1
  • 10.1186/s13293-025-00747-y
The role of visual experience in haptic spatial perception: evidence from early blind, late blind, and sighted individuals.
  • Aug 19, 2025
  • Biology of sex differences
  • Lara A Coelho + 5 more

There is contradictory evidence on the effect that visual experience has on haptic abilities. Some studies have documented that a lack of vision (blindness) results in decreased haptic perception, whereas other studies report an enhanced haptic ability in blind individuals. To examine the role of vision in haptic spatial processing, we recruited early blind, late blind, and sighted participants. Each participant completed a haptic task in which they explored a two-piece LEGO model for eight seconds before searching for the same pieces in a bowl of distractors. Our results showed that blind individuals made more errors than sighted participants. Furthermore, early blind participants performed worse than both late blind and sighted participants, who performed similarly. These findings highlight the importance that vision plays in the development of accurate haptic spatial perception. Additionally, we investigated whether the commonly reported male advantage in haptic tasks depends on visual experience. Our results showed better performance by males in all groups when compared to females. This result suggests that sex differences in haptic spatial processing are a fundamental characteristic of human sensory function, independent of visual experience.

Highlights: No study has investigated whether the previously identified male advantage in haptic spatial processing is mediated by visual experience. Blind participants made more errors than sighted participants; the early blind performed the worst. The findings suggest vision is crucial for the development of accurate haptic spatial perception. There was a consistent male advantage in haptic performance across all visual experience groups. Sex differences in haptic spatial ability appear to be independent of visual expertise.

Plain language summary: Some researchers have suggested that blindness reduces abilities in the other senses, while others believe that a lack of vision can improve them. To further understand which is true, we investigated whether the haptic system, the combination of touch and proprioception (awareness of where the body is in space), is affected by blindness. To do this, we tested people who were blind from birth (early blind), people who became blind later in life (late blind), and people who can see (sighted) on a simple haptic task. In the task, participants felt a small LEGO model with their hands for eight seconds. Then, they had to find the same LEGO pieces in a bowl filled with other, distractor pieces, using only haptics. We found that blind participants made more mistakes than sighted participants. Those who were blind from birth had the most difficulty. People who became blind later in life performed similarly to sighted individuals. This suggests that vision plays an important role in developing accurate haptic perception. As previous work has shown that males outperform females on haptic tasks, we also investigated whether those differences depended on vision. We found that males performed better than females in all groups, regardless of whether they were blind or sighted. This suggests that sex differences in haptic ability may be a basic feature of how our senses work and not just related to vision.

  • Research Article
  • 10.1371/journal.pone.0330284
Visual discrimination training increases the speed of stimulus processing and leads to an earlier onset of stimulus encoding
  • Aug 18, 2025
  • PLOS One
  • Camila Bustos + 3 more

Wide experience with complex visual stimuli results in better performance and faster responses in object discrimination, categorization, and identification through perceptual learning and expertise. Visual experts exhibit an earlier onset of the availability of stimulus information for encoding and a reduction of the encoding duration required for discrimination and individuation. However, it is still unresolved whether perceptual learning and expertise shape the speed of perceptual processing in the first milliseconds after stimulus onset. Twenty-seven participants developed perceptual learning and expertise through discrimination of pairs of Kanji stimuli across six sessions. Discrimination sensitivity was evaluated at four training levels with encoding durations between 17 and 1000 ms. Behavioral results show a gradual increase in sensitivity and a reduction in the encoding duration required for a given performance with discrimination training. A shifted exponential function fitted to the sensitivity data revealed that training leads to a faster rate of performance change with encoding duration, suggesting an increase in the speed of information extraction, as well as an earlier availability of stimulus information for encoding, suggesting an earlier onset of information extraction. Interestingly, the increase in the rate of performance paralleled that of sensitivity with training, suggesting an association with perceptual learning and expertise. In addition, the earlier availability of stimulus information is achieved after two training sessions, likely reflecting the acquisition of familiarity with the stimuli. The faster speed of information extraction and the earlier extraction of stimulus information for encoding likely contribute to the faster responses and higher performance typical of perceptual experts in object discrimination and individuation. These findings provide additional evidence for the effect of discrimination training on stimulus processing in the first milliseconds after stimulus onset.
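
The shifted-exponential analysis above can be reproduced in outline by fitting d' as a function of encoding duration, with the asymptote, rate, and onset as free parameters. A minimal sketch with invented data points; the functional form shown is a common parameterization of a shifted exponential, not necessarily the authors' exact equation:

```python
# Sketch of fitting a shifted exponential to sensitivity vs. encoding duration:
# d_max is the asymptote, tau the rate of information extraction, and
# t0 the onset of information availability. Data points are invented.
import numpy as np
from scipy.optimize import curve_fit

def shifted_exp(t, d_max, tau, t0):
    # Zero below onset t0, then exponential rise toward d_max.
    return d_max * (1.0 - np.exp(-np.clip(t - t0, 0, None) / tau))

durations = np.array([17, 33, 50, 100, 200, 400, 1000], dtype=float)  # ms
dprime = np.array([0.1, 0.5, 0.9, 1.6, 2.1, 2.4, 2.5])               # invented

params, _ = curve_fit(shifted_exp, durations, dprime, p0=[2.5, 100.0, 20.0])
d_max, tau, t0 = params
print(f"asymptote={d_max:.2f}, rate tau={tau:.1f} ms, onset t0={t0:.1f} ms")
```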

  • Research Article
  • 10.1038/s41598-025-14497-9
Superior monocular visual function but compromised binocular balance in precision shooters compared to age and refraction matched controls.
  • Aug 6, 2025
  • Scientific reports
  • Izabela K Garaszczuk + 2 more

Shooting sports demand exceptional visual performance, yet detailed assessments of visual function in precision shooters remain limited. This cross-sectional study evaluated 28 pistol and rifle shooters and 20 age- and refractive-error-matched non-athletic controls. Participants underwent comprehensive visual assessments, including tests of visual acuity (VA), Vernier acuity, contrast sensitivity, binocular vision, accommodation, ocular biometry, perimetry, and eye movement tracking. A subgroup of national-level athletes was also analyzed. Compared to controls, shooters demonstrated superior near VA (-0.08 ± 0.06 vs. 0.03 ± 0.07 logMAR; p = 0.003), binocular Vernier acuity (5.4 ± 3.2 vs. 8.7 ± 5.1 arcsec; p = 0.032), and dominant eye contrast sensitivity (p = 0.005). National-level shooters showed fewer gaze shifts (p = 0.044), more stable fixation, and better stereoacuity (25 vs. 35 arcsec; p = 0.005). Modality-specific differences were observed: pistol shooters exhibited better distance acuity and central field sensitivity, while rifle shooters, despite being older, performed better in near VA. However, covering one eye to avoid diplopia, which is inherent in precision shooting, may cause suppression of the covered eye when performed frequently and for prolonged periods. This may ultimately explain why shooting experience correlates with reduced binocular balance and a worse near point of convergence (r = 0.335, p = 0.020). These findings suggest that visual expertise in precision shooting is linked to task-specific visual adaptations. Tailored visual training programs may enhance performance and mitigate training-induced imbalances.
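
The experience-convergence relationship above is a simple bivariate correlation. A minimal sketch with invented paired values (the paper reports r = 0.335, p = 0.020; the numbers below will not reproduce that):

```python
# Sketch of the Pearson correlation between shooting experience and
# near point of convergence (NPC); larger NPC = worse convergence.
# Paired values are invented placeholders.
import numpy as np
from scipy.stats import pearsonr

experience_years = np.array([2, 4, 5, 7, 8, 10, 12, 15, 18, 20])
npc_cm = np.array([6.0, 6.5, 7.0, 6.8, 8.0, 7.5, 9.0, 8.5, 10.0, 9.5])

r, p = pearsonr(experience_years, npc_cm)
print(f"r = {r:.3f}, p = {p:.3f}")
```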

  • Open Access
  • Research Article
  • 10.2196/70073
MindLAMPVis as a Co-Designed Clinician-Facing Data Visualization Portal to Integrate Clinical Observations From Digital Phenotyping in Schizophrenia: User-Centered Design Process and Pilot Implementation
  • Jun 10, 2025
  • JMIR Formative Research
  • Karthik Sama + 7 more

Background: The potential of digital mental health to transform care delivery in low- and middle-income countries is well established. However, there remains the need to clinically and organically adapt current tools to local needs. This paper explores the process of creating a novel data visualization system for a digital mental health app and outlines the necessary steps in the process. This work demonstrates co-design involving collaboration between teams across geographies and disciplines based on clinicians’ requirements.

Objective: This study aims to co-design a visualization dashboard app for clinicians through a design study with a multidisciplinary team consisting of clinicians in Boston and Bangalore, mindLAMP software developers in Boston, and computer scientists with visualization expertise in Bangalore. The app is designed to visualize derivatives of both active and passive data of patients with schizophrenia to support the research contexts of digital psychiatry clinics in India.

Methods: The mindLAMP app, already used in many countries today, is adapted to offer a new clinician-facing data visualization portal, mindLAMPVis. The novel web-based portal is designed to improve clinical integration for use in India. After building the new portal, the insights from it are corroborated with known clinical observations of relapse using comparative visualization. The data were taken from the mindLAMP app and processed using multivariate analysis and dimensionality reduction to make them easy and manageable for clinicians to analyze. These techniques are integrated in mindLAMPVis, making it a locally co-designed, developed, and deployed tool. A feasibility study of the pilot implementation of the app was completed through a domain expert study with clinician-driven case studies.

Results: To assess the system, we preloaded data from 24 patients with schizophrenia, including those with relapses. Through case examples focusing on relapse risk prediction in schizophrenia, mindLAMPVis is used to identify different visualization methods to compare different analytical results for each patient. In partnership with clinicians co-designing the app, we explored the feasibility of a comparative visualization tool for discovering patterns across different time stamps for a single patient or any patterns across patients related to the relapse episode. As an example of reverse translation, mindLAMPVis offers new features that complement the original features of mindLAMP, highlighting the mutual benefit of software adaptation and collaborative design.

Conclusions: mindLAMPVis is a tailored tool designed for use in India, but it can aid in identifying and comparing behavioral patterns that may indicate clinical risk for patients in any country. mindLAMPVis offers an example of how, through technical design, feedback, and real-world clinical testing, it is feasible to adapt current software tools to meet local needs and even exceed the use cases of the original technology. mindLAMPVis also successfully incorporates both active and passive digital phenotyping data.
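
The dimensionality-reduction step mentioned in the Methods can be sketched as standardizing multivariate digital-phenotyping features and projecting them to two components for plotting. The feature names and shapes below are illustrative assumptions, not mindLAMP's actual schema:

```python
# Sketch of dimensionality reduction for a clinician-facing dashboard:
# standardize multivariate features, then project to 2-D with PCA.
# Shapes and features are illustrative (24 patients x 60 days x 5 features).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
features = rng.normal(size=(24 * 60, 5))  # placeholder patient-day feature rows

X = StandardScaler().fit_transform(features)
coords = PCA(n_components=2).fit_transform(X)
print(coords.shape)  # (1440, 2): one 2-D point per patient-day for plotting
```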

  • Research Article
  • 10.1145/3720546
AdGPT: Explore Meaningful Advertising with ChatGPT
  • Apr 18, 2025
  • ACM Transactions on Multimedia Computing, Communications, and Applications
  • Jiannan Huang + 3 more

Advertising is pervasive in everyday life. Some advertisements are not as readily comprehensible, as they convey a deeper message or purpose, which is referred to as “meaningful advertising.” These ads often aim to create an emotional connection with the audience or promote a social cause. Developing a method for automatically understanding meaningful advertising would be advantageous for the dissemination and creation of such ads. However, current models of ad understanding primarily focus on the superficial aspects of images. In this article, we introduce AdGPT, a model that leverages visual expert analysis to guide Large Language Models (LLMs) in generating adaptive reasoning chains. Informed by these chains of thought, the model can intelligently comprehend meaningful ads regarding category, content, and sentiment. To assess the effectiveness of our approach, we extract a subset of meaningful ads from the widely used Pitt’s ad images for analysis. Beyond employing traditional ad understanding metrics to evaluate the LLMs’ comprehensive ad comprehension, we also develop a novel generative metric that aligns with user study evaluations for consistent performance assessment. Experiments show that our method outperforms both existing state-of-the-art (SOTA) approaches that directly link visual expert models with LLMs and large-scale visual-language models. Code is available at https://github.com/Rbrq03/AdGPT.

  • Research Article
  • Cited: 2
  • 10.1111/anec.70082
Tracing Visual Expertise in ECG Interpretation: An Eye-Tracking Pilot Study.
  • Apr 18, 2025
  • Annals of noninvasive electrocardiology : the official journal of the International Society for Holter and Noninvasive Electrocardiology, Inc
  • Alessandro Bortolotti + 12 more

Visual expertise is pivotal for accurate ECG interpretation. We aimed to identify and measure expertise-based differences in visual search patterns, cognitive load, and diagnostic accuracy during ECG analysis using eye-tracking technology. First- to third-year residents and board-certified expert cardiologists interpreted ECGs of patients with suspected acute coronary syndrome, while eye-tracking glasses recorded fixation count, duration, and pupil dilation. Diagnostic accuracy and cognitive load via NASA Task Load Index were analyzed. Heatmaps illustrated relationships between cognitive load, perceived workload, and self-assessed performance across experience levels and ECG task complexities. Expert readers interpreted ECGs significantly faster than residents (107.6 ± 32.8 vs. 205.31 ± 57.43 s; p < 0.001) and demonstrated higher diagnostic accuracy across all levels of task difficulty (p < 0.001). Eye-tracking analysis revealed that experts exhibited fewer fixations (67.7 ± 25.7 vs. 143.7 ± 29.9; p < 0.001) and longer fixation durations (3.9 ± 0.7 vs. 3.2 ± 1 s; p = 0.032) than residents. Experts also showed lower pupil dilation changes (4.8% ± 2% vs. 10.5% ± 4.2%; p = 0.015). Increased task difficulty was associated with greater pupil dilation, particularly among novices (mean pupil dilation for difficult tasks 13.4% ± 4.1% vs. 7.3% ± 2.3% for easy tasks; p = 0.008), indicating higher cognitive demand. Experts maintained superior self-assessed performance (8 ± 0 vs. 7 ± 1.2; p = 0.009) and reported lower perceived negative workload (4.5 ± 1.45 vs. 6 ± 0.55; p = 0.041). In this pilot study, expert readers achieved faster and more accurate diagnoses, exhibiting more efficient visual search patterns and lower cognitive load. Pending external validation, our findings suggest that ECG training programs should focus on developing targeted visual techniques, cognitive efficiency, and adaptive coping strategies to enhance accurate interpretation.

  • Open Access
  • Research Article
  • 10.18264/eadf.v15i1.2386
Development of an Interface for Visualizing Engagement Profiles Created from Educational Data Grouping
  • Apr 14, 2025
  • EaD em Foco
  • Pamella Letícia Silva De Oliveira + 2 more

The distance learning modality faces challenges in improving teaching efficiency, reducing student isolation, and enhancing support technologies. Research focuses on student engagement, but large class sizes make individual tracking difficult. This study aimed to develop an interface for visualizing engagement profiles based on clustered educational data. The Design Science Research (DSR) methodology was used, which involved: 1) problem investigation through interviews with teachers; 2) development, selecting engagement variables, clustering algorithms, and visualization metaphors; 3) evaluations, with feedback from teachers and data visualization experts. Key findings include: 1) the need for tracking tools and the importance of forums, as mentioned by teachers; 2) the “what-why-how” structure for selecting the appropriate visualization; 3) features to ensure greater usability in dashboards, such as reducing scroll and grouping visualizations by information type, as pointed out by experts. Keywords: Distance education. Engagement. Data visualization. Usability.
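
The grouping step described above, clustering students into engagement profiles from activity data, can be sketched with k-means. The engagement variables and counts below are illustrative assumptions, not the study's dataset:

```python
# Sketch of clustering students into engagement profiles from LMS activity.
# Variables (forum posts, logins, submissions, video views) and all values
# are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
activity = rng.poisson(lam=[5, 20, 8, 15], size=(120, 4)).astype(float)

X = StandardScaler().fit_transform(activity)          # put variables on one scale
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
for k in range(3):
    print(f"profile {k}: {np.sum(labels == k)} students")
```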

  • Open Access
  • Research Article
  • 10.1609/aaai.v39i7.32718
Eve: Efficient Multimodal Vision Language Models with Elastic Visual Experts
  • Apr 11, 2025
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Miao Rang + 5 more

Multimodal vision language models (VLMs) have made significant progress with the support of continuously increasing model sizes and data volumes. Running VLMs on edge devices has become a challenge for their widespread application. There are several efficient VLM efforts, but they often sacrifice linguistic capabilities to enhance multimodal abilities, or require extensive training. To address this quandary, we introduce the innovative framework of Efficient Vision Language Models with Elastic Visual Experts (Eve). By strategically incorporating adaptable visual expertise at multiple stages of training, Eve strikes a balance between preserving linguistic abilities and augmenting multimodal capabilities. This balanced approach results in a versatile model with only 1.8B parameters that delivers significant improvements in both multimodal and linguistic tasks. Notably, in configurations below 3B parameters, Eve distinctly outperforms in language benchmarks and achieves state-of-the-art results in VLM Benchmarks. Additionally, its multimodal accuracy outstrips that of the larger 7B LLaVA-1.5 model.

  • Research Article
  • 10.1177/03010066251322631
Face and word superiority effects: Parallel effects of visual expertise.
  • Mar 26, 2025
  • Perception
  • Marko Chi-Wei Tien + 2 more

There are several studies that compare perception for written words and faces. However, many draw conclusions from different experimental paradigms, complicating direct comparison between these stimuli. Such comparisons are of interest because of hypotheses based on neuroimaging and neuropsychological data that face and word processing may have common underlying mechanisms and neural substrates. To facilitate such comparisons, we created a novel paradigm studying face recognition that closely resembles the word-superiority test, in which a letter is more easily identified when it is embedded in a whole word than when seen in isolation or in an unpronounceable random string of letters. Forty subjects each completed both of our tests. In the traditional word-superiority test, they briefly saw a word, a pseudoword, or a nonword, then a single test letter, and were asked if the letter had been part of the initial stimulus. In the face-superiority test, they briefly saw a learned, new, or scrambled face initially, then a test facial feature in isolation, and were asked to respond whether the feature had been part of the initial stimulus. For both categories of stimuli, there were similar differences between real, pseudo-, and non-stimuli. Accuracy was lower for non-stimuli compared to pseudo- and real stimuli, which in turn did not differ from each other. Response latency was greater for non-stimuli compared to pseudo-stimuli, which in turn was greater than for real stimuli. Bivariate analyses revealed significant correlations between interstimulus trials for reaction times. Our study replicated a face superiority effect using a methodology similar to the word-superiority test. Additionally, response latencies followed similar patterns in the recognition of written words and faces.

  • Open Access
  • Research Article
  • Cited: 1
  • 10.1007/s12149-025-02038-3
Fully automatic categorical analysis of striatal subregions in dopamine transporter SPECT using a convolutional neural network
  • Mar 16, 2025
  • Annals of Nuclear Medicine
  • Thomas Buddenkotte + 4 more

Objective: To provide fully automatic scanner-independent 5-level categorization of the [123I]FP-CIT uptake in striatal subregions in dopamine transporter SPECT.

Methods: A total of 3500 [123I]FP-CIT SPECT scans from two in-house (n = 1740, n = 640) and two external (n = 645, n = 475) datasets were used for this study. A convolutional neural network (CNN) was trained for the categorization of the [123I]FP-CIT uptake in unilateral caudate and putamen in both hemispheres according to 5 levels: normal, borderline, moderate reduction, strong reduction, almost missing. Reference standard labels for the network training were created automatically by fitting a Gaussian mixture model to histograms of the specific [123I]FP-CIT binding ratio, separately for caudate and putamen and separately for each dataset. The CNN was trained on a mixed-scanner subsample (n = 1957) and tested on one independent identically distributed (IID, n = 1068) and one out-of-distribution (OOD, n = 475) test dataset.

Results: The accuracy of the CNN for the 5-level prediction of the [123I]FP-CIT uptake in caudate/putamen was 80.1/78.0% in the IID test dataset and 78.1/76.5% in the OOD test dataset. All 4 regional 5-level predictions were correct in 54.3/52.6% of the cases in the IID/OOD test dataset. A global binary score automatically derived from the regional 5-scores achieved 97.4/96.2% accuracy for automatic classification of the scans as normal or reduced relative to visual expert read as reference standard.

Conclusions: Automatic scanner-independent 5-level categorization of the [123I]FP-CIT uptake in striatal subregions by a CNN model is feasible with clinically useful accuracy.
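
The automatic labelling step described above can be sketched by fitting a Gaussian mixture to specific binding ratio (SBR) values and mapping mixture components, ordered by mean uptake, to the five categories. The SBR values below are synthetic:

```python
# Sketch of GMM-based 5-level labelling of specific binding ratios (SBR).
# SBR values are synthetic (a bimodal reduced/normal mixture); the real
# pipeline fit separate models per region and per dataset.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
sbr = np.concatenate([rng.normal(1.5, 0.5, 600), rng.normal(5.0, 1.0, 900)])

gmm = GaussianMixture(n_components=5, random_state=0).fit(sbr.reshape(-1, 1))
# Order components by mean so labels are monotone in uptake level.
order = np.argsort(gmm.means_.ravel())
rank = {comp: level for level, comp in enumerate(order)}
levels = np.array([rank[c] for c in gmm.predict(sbr.reshape(-1, 1))])
names = ["almost missing", "strong reduction", "moderate reduction", "borderline", "normal"]
print({names[l]: int(np.sum(levels == l)) for l in range(5)})
```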

  • Research Article
  • 10.1038/s41598-025-88178-y
Specific visual expertise reduces susceptibility to visual illusions
  • Mar 13, 2025
  • Scientific Reports
  • Radoslaw Wincza + 6 more

Extensive exposure to specific kinds of imagery tunes visual perception, enhancing recognition and interpretation abilities relevant to those stimuli (e.g. radiologists can rapidly extract important information from medical scans). For the first time, we tested whether specific visual expertise induced by professional training also affords domain-general perceptual advantages. Experts in medical image interpretation (n = 44; reporting radiographers, trainee radiologists, and certified radiologists) and a control group consisting of psychology and medical students (n = 107) responded to the Ebbinghaus, Ponzo, Müller-Lyer, and Shepard Tabletops visual illusions in forced-choice tasks. Our results show that medical image experts were significantly less susceptible to all illusions except the Shepard Tabletops, demonstrating superior perceptual accuracy. These findings may be attributable to a stronger local processing bias, a by-product of learning to focus on specific areas of interest while disregarding irrelevant context in their domain of expertise.

  • Research Article
  • 10.1186/s12947-025-00338-2
Performance of a point-of-care ultrasound platform for artificial intelligence-enabled assessment of pulmonary B-lines
  • Mar 3, 2025
  • Cardiovascular Ultrasound
  • Ashkan Labaf + 5 more

Background: The incorporation of artificial intelligence (AI) into point-of-care ultrasound (POCUS) platforms has rapidly increased. The number of B-lines present on lung ultrasound (LUS) serves as a useful tool for the assessment of pulmonary congestion. Interpretation, however, requires experience, and therefore AI automation has been pursued. This study aimed to test the agreement between the AI software embedded in a major vendor POCUS system and visual expert assessment.

Methods: This single-center prospective study included 55 patients hospitalized for various respiratory symptoms, predominantly acutely decompensated heart failure. A 12-zone protocol was used. Two experts in LUS independently categorized B-lines into 0, 1–2, 3–4, and ≥ 5. The intraclass correlation coefficient (ICC) was used to determine agreement.

Results: A total of 672 LUS zones were obtained, with 584 (87%) eligible for analysis. Compared with expert reviewers, the AI significantly overcounted the number of B-lines per patient (23.5 vs. 2.8, p < 0.001). A greater proportion of zones with > 5 B-lines was found by the AI than by the reviewers (38% vs. 4%, p < 0.001). The ICC between the AI and reviewers was 0.28 for the total sum of B-lines and 0.37 for the zone-by-zone method. The interreviewer agreement was excellent, with ICCs of 0.92 and 0.91, respectively.

Conclusion: This study demonstrated excellent interrater reliability of B-line counts from experts but poor agreement with the AI software embedded in a major vendor system, primarily due to overcounting. Our findings indicate that further development is needed to increase the accuracy of AI tools in LUS.
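
The agreement analysis above rests on the intraclass correlation coefficient between AI and expert B-line counts. A minimal sketch in long format with invented per-zone counts; pingouin reports several ICC variants, and the study does not state which was used:

```python
# Sketch of a zone-by-zone ICC between AI and expert B-line counts.
# Counts are invented (the AI deliberately overcounts, mimicking the finding).
import pandas as pd
import pingouin as pg

zones = list(range(1, 9)) * 2
raters = ["AI"] * 8 + ["expert"] * 8
counts = [6, 5, 7, 4, 6, 5, 7, 6,   # AI counts per zone
          1, 0, 2, 1, 1, 0, 3, 1]   # expert counts per zone

df = pd.DataFrame({"zone": zones, "rater": raters, "blines": counts})
icc = pg.intraclass_corr(data=df, targets="zone", raters="rater", ratings="blines")
print(icc[["Type", "ICC"]])
```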

  • Research Article
  • 10.1007/s11218-025-10036-6
Teacher gaze and attitudes toward student gender: evidence from eye tracking and implicit association tests
  • Mar 3, 2025
  • Social Psychology of Education
  • Sylvia Gabel + 3 more

Previous research has examined teacher attitudes toward student gender and teacher eye movements when looking at girls and boys in classrooms. However, to date, these two lines of research have remained rather separate. To better understand the co-occurrence of visual and attitudinal preferences, we investigated whether pre-service teachers’ attitudes are associated with their selective attention allocation toward girls and boys. Grounded in the cognitive theory of visual expertise, this multi-method study invited n = 105 pre-service teachers to watch a classroom video while their gaze was recorded. In addition, feeling thermometers measured their explicit gender attitudes and an implicit association test (IAT) measured their implicit gender attitudes. Findings revealed that female and male teachers implicitly and explicitly favored girls over boys. The results also demonstrated that, independent of teacher gender, girls were fixated more frequently than boys. When examining the correlation between attitudes and fixations, the study found that pre-service teachers’ implicit attitudes and their number of fixations on girls were positively correlated. These results confirm the assumption that attention tends to be directed more toward information that is consistent (rather than inconsistent) with underlying teacher attitudes, especially in complex tasks, possibly to reduce mental effort. Future research can consider the context of the observation (language lessons), as teachers’ expectations in different disciplinary fields and observation contexts may influence the co-occurrence of attitudes and gaze in the classroom. Further directions on the use of eye tracking as a tool to reflect on gender biases are discussed.
