
Human Observers Research Articles

Overview
6856 articles, published in the last 50 years

Related Topics

  • Human Observer Performance
  • Observer Model
  • Ideal Observer

Articles published on Human Observers

6659 search results, sorted by recency
Misreporting of Freezing Fog During Snowfall Conditions in U.S. METAR Observations

Abstract Misreported weather conditions at airports can cause significant and unnecessary flight delays and cancellations, while increasing costs to the airlines. In 2022, updates to the Federal Aviation Administration (FAA) Holdover Time Tables for aircraft ground deicing operations included guidance for snow (SN) mixed with freezing fog (FZFG). Holdover Time Tables provide information on the length of time (i.e., holdover time) anti-icing fluids will protect the aircraft prior to takeoff under various winter weather conditions. The new holdover times for SN mixed with FZFG are significantly shorter than the holdover times for SN or FZFG reported individually. Prior to the introduction of this guidance, pilots would often assess the SN and FZFG conditions individually and use the most conservative holdover time between the two weather conditions. The new guidance has led pilots and ground deicing crews to express concern that FZFG conditions are often reported with SN when FZFG isn’t present. To assess this, one-minute-observation data from select ASOS locations prone to SN and FZFG conditions were analyzed to determine if a FZFG signal could be detected using measurements other than visibility during SN conditions. Additionally, Meteorological Aerodrome Reports (METARs) from two nearly co-located airports (one in the U.S. and one in Canada) were analyzed since Canada relies on human observers to report obscurations, including FZFG. The outcome of both methods indicates a significant number (~85%) of misreported FZFG reports during SN conditions and provides a basis for improving the automated weather-reporting algorithms.
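The co-occurrence analysis described above amounts to asking, across many reports, how often FZFG accompanies SN. A minimal sketch over hypothetical METAR present-weather strings (illustrative sample data, not the study's ASOS observations):

```python
# Fraction of snow (SN) reports that also carry freezing fog (FZFG).
# The report strings below are hypothetical, not the study's data.
def fzfg_coreport_fraction(weather_groups):
    sn = [w for w in weather_groups if "SN" in w]
    if not sn:
        return 0.0
    return sum("FZFG" in w for w in sn) / len(sn)

reports = [
    "-SN FZFG",  # light snow with freezing fog co-reported
    "SN BR",     # snow with mist
    "-SN FZFG",
    "FZFG",      # freezing fog only; not a snow report
]
print(fzfg_coreport_fraction(reports))  # → 0.6666666666666666
```

Applied to real one-minute ASOS data, the same counting logic would be run only over intervals independently verified for fog, which is where the study's ~85% misreport figure comes from.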

  • Journal: Journal of Applied Meteorology and Climatology
  • Published: May 12, 2025
  • Authors: Scott D Landolt + 7

Zooming in and out: Selective attention modulates color signals in early visual cortex for narrow and broad ranges of task-relevant features.

Research on feature-based attention has shown that selecting a specific visual feature (e.g., the color red) results in enhanced processing in early visual cortex, providing the neural basis for the efficient identification of relevant features in many everyday tasks. However, many situations require the selection of entire feature ranges instead of just a single feature value, and recent accounts have proposed that broadly tuned attentional templates are often critical for guiding selection in cluttered visual scenes. To assess the neural implementation of such broad tuning of feature-based attention, we recorded frequency-tagged potentials in human observers (male and female) while participants attended to narrow or broad ranges of colors of spatially intermingled dot fields. Our results show clear increases in signal strength for the attended colors relative to unattended colors for both narrow and broad color ranges, though this increase was reduced in the broad-range condition, suggesting that limits in the breadth of attentional tuning arise at early processing stages. Overall, the present findings indicate that feature-selective attention can amplify multiple contiguous color values in early visual cortex, shedding light on the neural mechanisms underlying broad search templates. More generally, they illustrate how feature-based attention can dynamically 'zoom in' and 'zoom out' in feature space, mirroring models of spatial attention.

Significance statement: Many daily situations require the human brain to focus attention on entire sets of feature values, for example when looking for apples in the supermarket, which may range from red to yellow to green. How is such broad selection of perceptually contiguous features accomplished? Using electroencephalography, we directly measured early visual processing while participants attended to different color ranges. Our results demonstrate that processing of entire sets of colors is enhanced in early visual cortex, though the magnitude of this enhancement is modulated by the selected range. This result is important for our understanding of how attention is allocated in complex visual scenes, in which relevant inputs are often variable and not defined by a single feature value.

  • Journal: The Journal of neuroscience : the official journal of the Society for Neuroscience
  • Published: May 9, 2025
  • Authors: Mert Özkan + 2

A virtual imaging study of microcalcification detection performance in digital breast tomosynthesis: Patients versus 3D textured phantoms.

Clinical studies to evaluate the performance of new imaging devices require the collection of patient data. Virtual methods present a potential alternative in which patient-simulating phantoms are used instead. This work uses a virtual imaging technique to examine the extent to which human observer microcalcification detection performance in phantom backgrounds matches that in real patient backgrounds for digital breast tomosynthesis (DBT). This work used the following DBT image datasets: (1) 142 real patient images and (2) 20 real images of the physical L1 phantom, both acquired on a GEHC Senographe Pristina system; (3) 217 simulated images of the Stochastic Solid Breast Texture (SSBT) phantom and (4) 217 simulated images of the digital L1 phantom, both created with the CatSim framework. The L1 phantom is a PMMA container filled with water and PMMA spheres of varying diameters. The SSBT phantom is a computational phantom composed of glandular and adipose tissue compartments. Signal-present images were generated by inserting simulated microcalcification clusters, containing individual calcifications with thicknesses and projected areas in the range of 165-180µm, 195-210µm and 225-240µm, and 0.025-0.031mm2, 0.032-0.040mm2, 0.041-0.045mm2 respectively, at random locations into all four background types. Three human observers performed a search/localization task on 120 signal-present and 97 signal-absent volumes of interest (VOIs) per background type. A jackknife alternative free-response receiver operating characteristic (JAFROC) analysis was applied to calculate the area under the curve (AUC). The simulation procedure was first validated by testing the physical and digital L1 background AUC values for equivalence (margin=0.1). The AUC for patient backgrounds and each phantom type (SSBT, physical L1, digital L1) was then compared. 
Additionally, each patient VOI was categorized as having a homogeneous or heterogeneous background texture by an experienced physicist, and by local volumetric breast density (VBD) at the insertion position, to examine their effect on the correctly detected fraction of microcalcification clusters. Mean AUC for the patient images was 0.70±0.04, while mean AUCs of 0.74±0.04, 0.76±0.03, and 0.76±0.07 were found for the SSBT, physical L1, and digital L1 phantoms, respectively. The AUC for the physical and digital L1 phantoms was equivalent (p=0.03), as was the AUC for the patient and SSBT backgrounds (p=0.002). The physical and digital L1 images did not show detection performance equivalent to patient images (p=0.06 and p=0.9, respectively). In patient backgrounds, the correctly detected fraction of microcalcification clusters fell from 0.53 for the lowest density (VBD<4.5%) to 0.40 for the highest density (VBD≥15.5%). Microcalcification detection fractions were 0.52, 0.55, and 0.55 for the SSBT, physical L1, and digital L1 backgrounds, respectively. Detection levels were equivalent between the physical and digital versions of the L1 phantom. Detection in L1 and patient backgrounds was not equivalent; however, differences in detection performance were small, confirming the potential value of this phantom. The digital SSBT phantom was found to be equivalent to patient backgrounds for DBT studies of microcalcification cluster detection performance, for the DBT system and reconstruction algorithm used in this study.
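A full JAFROC analysis also credits correct lesion localization, but its core figure of merit reduces to a nonparametric AUC over observer ratings. A simplified sketch with hypothetical confidence ratings (not the study's data, and without the localization bookkeeping):

```python
def auc_from_ratings(signal, noise):
    """Nonparametric AUC (Mann-Whitney statistic): the probability that a
    randomly chosen signal-present rating exceeds a signal-absent one,
    with ties counted as half."""
    pairs = [(s, n) for s in signal for n in noise]
    wins = sum(1.0 if s > n else 0.5 if s == n else 0.0 for s, n in pairs)
    return wins / len(pairs)

# Hypothetical 5-point confidence ratings (higher = more confident a
# microcalcification cluster is present)
present = [4, 5, 3, 5, 2]   # signal-present VOIs
absent = [1, 2, 3, 1, 2]    # signal-absent VOIs
print(round(auc_from_ratings(present, absent), 3))  # → 0.9
```

An AUC of 0.5 corresponds to chance performance and 1.0 to perfect separation, which is why the phantom-versus-patient comparison above is framed entirely in AUC differences.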

  • Journal: Medical physics
  • Published: May 8, 2025
  • Authors: Katrien Houbrechts + 8

Deepfake Detection Using XceptionNet

The rapid rise of synthetic media, especially deepfakes, has sparked major concerns around misinformation, identity fraud, and diminishing public confidence in visual content. As these altered videos grow increasingly realistic, there is a pressing demand for reliable and scalable detection methods. This paper explores the use of the XceptionNet convolutional neural network architecture for deepfake detection. The analysis is based on the FaceForensics++ dataset, which comprises more than 1.8 million manipulated images created with four sophisticated face manipulation methods: NeuralTextures, FaceSwap, Face2Face, and DeepFakes. Cropped facial images are used for binary classification, i.e., differentiating between authentic and fraudulent content. Experimental results, with an accuracy of over 95% on unprocessed, high-quality videos and over 80% even on heavily compressed videos, demonstrate that XceptionNet significantly outperforms both human observers and traditional detection methods, particularly under conditions of image compression. These findings highlight the robustness of deep learning-based models and the critical role of domain-specific preprocessing in improving detection accuracy.

  • Journal: International Journal of Scientific Research in Science and Technology
  • Published: May 4, 2025
  • Authors: Muskan Kumari + 2

A Comparison of Artificial Intelligence and Human Observation in the Assessment of Cattle Handling and Slaughter.

Slaughter facilities use a variety of tools to evaluate animal handling, including but not limited to live audits, remote video auditing, and some AI technologies. The objective of this study was to determine the similarity between AI and human evaluator assessments of critical cattle handling outcomes in a slaughter plant. One hundred twelve video clips of cattle handling and stunning from a slaughter plant in the United Kingdom were collected. The AI identified the presence or absence of: Stunning, Electric Prod Usage, Falling, Pen Crowding, and Questionable Handling Events. Three human evaluators scored the videos for these outcomes. Four different datasets were generated, and Jaccard similarity indices were calculated. There was high similarity (JI > 0.90) for Stunning, Electric Prod Usage, and Falls between the evaluators and the AI. There was high consistency (JI > 0.80) for Pen Crowding. There were differences (JI ≥ 0.50) between the humans and the AI when identifying Questionable Animal Handling Events, but the AI was adept at identifying events for further review. The implementation of AI to assist with cattle handling assessment in a slaughter facility environment could be an added tool to enhance animal welfare programs.
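The Jaccard similarity index used above is simply intersection over union of the event sets flagged by each rater. A minimal sketch with hypothetical clip IDs (not the study's data):

```python
def jaccard_index(a, b):
    """Jaccard similarity between two sets of flagged events:
    |A ∩ B| / |A ∪ B|, with 1.0 for two empty sets."""
    a, b = set(a), set(b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Hypothetical clip IDs flagged for "Electric Prod Usage" by the AI
# and by a human evaluator
ai_flags = {3, 7, 12, 18, 21}
human_flags = {3, 7, 12, 18, 30}
print(jaccard_index(ai_flags, human_flags))  # → 0.6666666666666666
```

A JI of 1.0 means the two raters flagged exactly the same clips; values near 0.5, as reported for Questionable Handling Events, indicate substantial disagreement about which clips qualify.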

  • Journal: Animals : an open access journal from MDPI
  • Published: May 3, 2025
  • Authors: Lily Edwards-Callaway + 3

A Convolutional Neural Network as a Potential Tool for Camouflage Assessment

Camouflage effectiveness is traditionally evaluated through human visual search and detection experiments, which are time-consuming and resource-intensive. To address this, we explored whether a pre-trained convolutional neural network (YOLOv4-tiny) can provide an automated, image-based measure of camouflage effectiveness that aligns with human perception. We conducted behavioral experiments to obtain human detection performance metrics, such as search time and target conspicuity, and compared these to the classification probabilities output by the YOLO model when detecting camouflaged individuals in rural and urban scenes. YOLO's classification probability was adopted as a proxy for detectability, allowing direct comparison with human observer performance. We found a strong overall correspondence between YOLO-predicted camouflage effectiveness and human detection results. However, discrepancies emerged at close distances, where YOLO's performance was particularly sensitive to high-contrast, shape-breaking elements of the camouflage pattern. CNNs such as YOLO have significant potential for assessing camouflage effectiveness in a wide range of applications, such as evaluating or optimizing one's signature and predicting optimal hiding locations in a given environment. Still, further research is required to fully establish YOLO's limitations and applicability for this purpose in real time.

  • Journal: Applied Sciences
  • Published: May 2, 2025
  • Authors: Erik Van Der Burg + 3
  • Open Access

Emerging roles for the nucleolus in development and stem cells.

The nucleolus is a membrane-less subnuclear compartment known for its role in ribosome biogenesis. However, emerging evidence suggests that nucleolar function extends beyond ribosome production and is particularly important during mammalian development. Nucleoli are dynamically reprogrammed post-fertilisation: totipotent early mouse embryos display non-canonical, immature nucleolar precursor bodies, and their remodelling to mature nucleoli is essential for the totipotency-to-pluripotency transition. Mounting evidence also links nucleolar disruption to various pathologies, including embryonic lethality in mouse mutants for nucleolar factors, human developmental disorders and observations of nucleolar changes in disease states. As well as its role in ribogenesis, new findings point to the nucleolus as an essential regulator of genome organisation and heterochromatin formation. This Review summarises the varied roles of nucleoli in development, primarily in mammals, highlighting the importance of nucleolar chromatin for genome regulation, and introduces new techniques for exploring nucleolar function.

  • Journal: Development (Cambridge, England)
  • Published: May 1, 2025
  • Authors: Bryony J Leeke + 2

Deriving WMO Cloud Classes From Ground‐Based RGB Pictures With a Residual Neural Network Ensemble

Abstract Clouds of various kinds play a substantial role in a wide variety of atmospheric processes. They are directly linked to the formation of precipitation and significantly affect the atmospheric energy budget via radiative effects and latent heat. Moreover, knowledge of currently occurring cloud types allows the observer to draw conclusions about the short-term evolution of the state of the atmosphere and the weather. A consistent cloud classification scheme was therefore introduced almost 100 years ago. In this work, we train an ensemble of identically initialized multi-label residual neural network architectures from scratch with ground-based RGB pictures. Operational human observations, consisting of up to three out of 30 cloud classes per instance, are used as ground truth. To the best of our knowledge, we are the first to classify clouds with this methodology into 30 different classes. Class-specific resampling is used to reduce prediction biases due to a highly imbalanced ground truth class distribution. Results indicate that the ensemble mean outperforms the best single member in each cloud class. Still, each single member clearly outperforms both random and climatological predictions. Attribute diagrams indicate underconfidence in heavily augmented classes and very good calibration in all other classes. Autonomy and output consistency are the main advantages of such a trained classifier; hence, we consider operational cloud monitoring the main application, either for consistent cloud class observations or for observing the current state of the weather and its short-term evolution at high temporal resolution, for example in the proximity of solar power plants.

  • Journal: Earth and Space Science
  • Published: May 1, 2025
  • Authors: Markus Rosenberger + 2

Deep learning-based automated segmentation of cardiac real-time MRI in non-human primates.


  • Journal: Computers in biology and medicine
  • Published: May 1, 2025
  • Authors: Majid Ramedani + 3

Can AI-assisted objective facial attractiveness scoring systems replace manual aesthetic evaluations? A comparative analysis of human and machine ratings.


  • Journal: Journal of plastic, reconstructive & aesthetic surgery : JPRAS
  • Published: May 1, 2025
  • Authors: Ben Wang + 2

Clinical Application of Deep Learning-Assisted Needles Reconstruction in Prostate Ultrasound Brachytherapy.


  • Journal: International journal of radiation oncology, biology, physics
  • Published: May 1, 2025
  • Authors: Mathieu Goulet + 5

Reverse Correlation of Natural Statistics for Ecologically-Relevant Characterization of Human Perceptual Templates.

Psychophysical reverse correlation is an established technique for retrieving perceptual templates. Its application is best suited to a scenario in which 1) the human observer operates as a template matcher, and 2) the perceptual system is probed using radially symmetric noise, such as Gaussian white noise. When both conditions apply, the resulting estimate of the perceptual template directly reflects the actual template engaged by observers. However, when either condition fails, template estimates can be highly distorted to the point of becoming uninterpretable. This limitation is particularly relevant when ecological validity is under consideration, because natural signals are clearly nothing like white noise. Template distortions associated with natural statistics may be corrected using a number of methods, many of which have been tested in single neurons, but none of which has been tested in human observers. We studied the applicability (or lack thereof) of five such methods to multiple experimental conditions under which the human visual system approaches a template matcher to different degrees of approximation. We find that methods based on minimizing/maximizing loss/information, such as logistic regression and maximally informative dimensions, outperform other approaches under the conditions of our experiments, and therefore represent promising tools for the retrieval of human perceptual templates under ecologically valid conditions. However, we also identify plausible scenarios under which those same approaches produce misleading outcomes, urging caution when interpreting results from those and related methods.
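Under the two conditions named above (a template-matching observer probed with white noise), classical reverse correlation estimates the template as the difference between noise averages on 'yes' and 'no' trials. A simulated sketch with a hypothetical 4-element template (illustrative, not the study's stimuli):

```python
import numpy as np

def classification_image(noise, responses):
    """Classical reverse-correlation template estimate: mean noise on
    'yes' trials minus mean noise on 'no' trials."""
    noise = np.asarray(noise, dtype=float)
    responses = np.asarray(responses, dtype=bool)
    return noise[responses].mean(axis=0) - noise[~responses].mean(axis=0)

# Simulate a noisy template-matching observer probed with Gaussian
# white noise (hypothetical template, not from the paper)
rng = np.random.default_rng(1)
template = np.array([1.0, -1.0, 0.5, 0.0])
noise = rng.normal(size=(5000, 4))
responses = noise @ template + rng.normal(size=5000) > 0
estimate = classification_image(noise, responses)
print(np.corrcoef(estimate, template)[0, 1])
```

With white noise and a linear observer the estimate is proportional to the true template; the paper's point is that with naturalistic (correlated) stimuli this simple difference becomes distorted, motivating methods such as logistic regression and maximally informative dimensions.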

  • Journal: Journal of neurophysiology
  • Published: Apr 23, 2025
  • Authors: Lorenzo Landolfi + 1

Multitask Deep Learning for Automated Detection of Endoleak at Digital Subtraction Angiography during Endovascular Aneurysm Repair.

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. Purpose To develop and evaluate a novel multitask deep learning framework for automated detection and localization of endoleaks at aortic digital subtraction angiography (DSA) performed during real-world endovascular aneurysm repair (EVAR) procedures for abdominal aortic aneurysm. Materials and Methods This retrospective study analyzed intraoperative aortic DSA images from EVAR patients (January 2017-December 2021). An expert panel assessed each sequence for endoleaks. Each sequence was processed into three input channels: peak density (PD), time to peak (TTP), and area under the time-density curve (AUC-TD), generating three 2D perfusion maps per patient. These maps served as input into a convolutional neural network (CNN) for binary detection (classification) and localization (regression) of endoleaks through multitask learning. Fivefold cross-validation was performed, with patients split 80:20 into training/testing for each fold. Performance metrics included AUC, F1 score, precision, recall and were compared with human experts. Results The study included 220 patients (181 male; median age, 74 years; IQR, 68-79 years). Endoleaks were visible in 111 out of 220 (50.5%) patients. The model identified and localized endoleaks with an AUC of 0.85 (SD 0.0031), F1 score of 0.78 (SD 0.21), 95% precision, and 73% recall. Compared with the procedural team (94% precision, 63% recall), it had higher values in both metrics, with an F1-score within the human observer range (0.75-0.85). Balancing regression and classification by multitask learning delivered optimal results. 
The interobserver agreement among human experts was moderate (Fleiss' Kappa = 0.404). Conclusion A novel, fully automated deep learning method accurately detected and localized endoleaks on DSA imaging from EVAR procedures. ©RSNA, 2025.
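The classification-plus-regression balance described in this abstract can be sketched as one combined per-case loss. The specific form and weighting below are illustrative assumptions, not the paper's implementation:

```python
import math

def multitask_loss(p_endoleak, y_label, xy_pred, xy_true, w_loc=1.0):
    """Joint loss: binary cross-entropy for endoleak presence plus a
    squared-error localization term, applied only when an endoleak
    actually exists. w_loc balances the two tasks (assumed value)."""
    eps = 1e-7
    p = min(max(p_endoleak, eps), 1 - eps)
    bce = -(y_label * math.log(p) + (1 - y_label) * math.log(1 - p))
    loc = sum((a - b) ** 2 for a, b in zip(xy_pred, xy_true)) if y_label else 0.0
    return bce + w_loc * loc

# Confident detection with accurate localization → small combined loss
print(multitask_loss(0.9, 1, (0.52, 0.48), (0.50, 0.50)))
```

Gating the localization term on the presence label is one common way to keep signal-absent cases from penalizing the regression head; the paper's reported gains come from tuning exactly this kind of balance.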

  • Journal: Radiology. Artificial intelligence
  • Published: Apr 23, 2025
  • Authors: Stefan P M Smorenburg + 3

Abstract 2535: Uncovering bias in AI-based digital pathology analysis

Abstract As the most transformative technology in a generation, Artificial Intelligence (AI) is a tool that brings tremendous promise and significant risks, particularly in regard to healthcare. Our project aims to address two challenges faced by researchers utilizing AI in the analysis of immunohistochemical (IHC) images: the potential for covert bias and the opaque nature of "black box" systems. We have trained an AI-driven model to detect tumor and stromal regions in non-small cell lung cancer (NSCLC) tissues stained with a variety of IHC markers, including the histone methyltransferase EZH2, the histone mark H3K27me3, the antigen presentation components B2M, HLA-DR, DQ, DP, the immune evasion molecule PD-L1, and the enzyme CBS. By measuring stain intensity in each region, we observed that EZH2 and PD-L1 staining are significantly higher in squamous cell carcinoma tumor cells than in adenocarcinoma tumor cells, and that HLA-DR, DQ, DP has an inverse relationship with EZH2. These data suggest a reason why some squamous cell carcinomas fail anti-PD1 immunotherapy. However, to ensure that the system was not creating or perpetuating existing disparities, we next endeavored to assess the algorithm's accuracy across multiple demographic groups. Using Cohen's kappa as a measure of inter-rater reliability, we determined that our algorithm has near perfect agreement with trained human observers in each of the six stains tested (EZH2, H3K27me3, B2M, HLA-DR, DQ, DP, PD-L1, and CBS). This was true regardless of race (Black vs Caucasian) or geographical origin (Appalachian Kentucky vs Non-Appalachian Kentucky). However, when comparing performance by gender, one category fell below the threshold for near perfect agreement (CBS in males). Next, we sought to explore what factors might drive the AI's decision-making process by comparing its accuracy in various histologic contexts. We analyzed the size of the nuclei, the number of nuclei per mm², the degree of immune infiltration, the width of the stroma, and the degree of randomness in the arrangement of the tumor and stroma. While the size of the nuclei was not found to have a significant impact on accuracy, the other variables did have the potential to "confuse" the AI, which could lead to less reliable results. By tackling these challenges head-on, we hope that our findings will help set a standard for the equitable and transparent application of artificial intelligence in healthcare. Work supported by Markey STRONG Scholars Program through the American Cancer Society IRG-22-152-34 (EMS), AIML Hub pilot (EMS), T32 CA165990 (DRP), R01 CA237643 (CFB), R01 HL170193 (CFB), P30 CA177558 (Markey Shared Resources). Citation Format: Erika M. Skaggs, Daniel R. Plaugher, Sally R. Ellingson, Christine F. Brainson. Uncovering bias in AI-based digital pathology analysis [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2025; Part 1 (Regular Abstracts); 2025 Apr 25-30; Chicago, IL. Philadelphia (PA): AACR; Cancer Res 2025;85(8_Suppl_1):Abstract nr 2535.
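Cohen's kappa, the agreement measure used above, corrects raw agreement for the agreement expected by chance. A minimal sketch with hypothetical tumor/stroma region calls (not the study's data):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(r1) == len(r2)
    n = len(r1)
    cats = set(r1) | set(r2)
    po = sum(a == b for a, b in zip(r1, r2)) / n          # observed
    pe = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)  # chance
    return (po - pe) / (1 - pe)

# Hypothetical region calls from the model and a human rater
model = ["tumor", "tumor", "stroma", "tumor", "stroma", "stroma"]
human = ["tumor", "tumor", "stroma", "stroma", "stroma", "stroma"]
print(round(cohens_kappa(model, human), 3))  # → 0.667
```

On the usual Landis-Koch scale, values above roughly 0.8 are read as near perfect agreement, which is the threshold the abstract refers to for the CBS-in-males category.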

  • Journal: Cancer Research
  • Published: Apr 21, 2025
  • Authors: Erika M Skaggs + 3

Adjuvant VaccInation After Conization for the Treatment for CervicAL Dysplasia.

This study aimed to evaluate the role of adjuvant HPV vaccination in women undergoing conization for cervical intraepithelial neoplasia. This prospective study assessed factors influencing recurrence in patients undergoing conization for high-grade cervical dysplasia. After conization, patients were counseled on the potential benefits of vaccination. We compared outcomes between two groups: women who underwent conization with adjuvant human papillomavirus (HPV) vaccination and observation versus conization with observation only. Data from 281 patients were analyzed, comprising 168 (59.8%) patients in the conization-only group and 113 (40.2%) patients in the conization-plus vaccination group. Vaccinated patients were younger than nonvaccinated patients (38 vs. 45 years, P < 0.001). Positive surgical margins were more frequently observed in the vaccinated group compared with the nonvaccinated group (9.7 vs. 3.6%; P = 0.038). Median follow-up was shorter in the vaccinated group, although this difference was not statistically significant (24.9 vs. 27.8 months; P = 0.395). The risk of developing HPV-related lesions was similar between the vaccinated and nonvaccinated groups (P = 0.594, log-rank test). Likewise, the need for reconization did not differ significantly between the groups (P = 0.593, log-rank test). Multivariate analysis showed no significant impact of HPV vaccination on postoperative outcomes [hazard ratio (HR): 0.50, 95% confidence interval (CI): 0.15-1.68) for any lesion; HR: 0.90, 95% CI: 0.47-1.73 for reconization]. This study indicates that adjuvant HPV vaccination does not significantly affect short-term outcomes in women undergoing conization for cervical dysplasia. Ongoing randomized trials will provide more robust evidence to clarify the role of adjuvant vaccination in this setting.

  • Journal: European journal of cancer prevention : the official journal of the European Cancer Prevention Organisation (ECP)
  • Published: Apr 18, 2025
  • Authors: Carlotta Caia + 13

Visual pleasantness and unpleasantness of natural surfaces.


  • Journal: Vision research
  • Published: Apr 18, 2025
  • Authors: Narumi Ogawa + 3

Perceptual learning improves discrimination but does not reduce distortions in appearance.

Human perceptual sensitivity often improves with training, a phenomenon known as "perceptual learning." Another important perceptual dimension is appearance, the subjective sense of stimulus magnitude. Are training-induced improvements in sensitivity accompanied by more accurate appearance? Here, we examined this question by measuring both discrimination (sensitivity) and estimation (appearance) responses to near-horizontal motion directions, which are known to be repulsed away from horizontal. Participants performed discrimination and estimation tasks before and after training in either the discrimination or the estimation task or none (control group). Human observers who trained in either discrimination or estimation exhibited improvements in discrimination accuracy, but estimation repulsion did not decrease; instead, it either persisted or increased. Hence, distortions in perception can be exacerbated after perceptual learning. We developed a computational observer model in which perceptual learning arises from increases in the precision of underlying neural representations, which explains this counterintuitive finding. For each observer, the fitted model accounted for discrimination performance, the distribution of estimates, and their changes with training. Our empirical findings and modeling suggest that learning enhances distinctions between categories, a potentially important aspect of real-world perception and perceptual learning.

  • Journal: PLoS computational biology
  • Published: Apr 15, 2025
  • Authors: Sarit F.A Szpiro + 3
  • Open Access

How distinct sources of nuisance variability in natural images and scenes limit human stereopsis.

Stimulus variability-a form of nuisance variability-is a primary source of perceptual uncertainty in everyday natural tasks. How do different properties of natural images and scenes contribute to this uncertainty? Using binocular disparity as a model system, we report a systematic investigation of how various forms of natural stimulus variability impact performance in a stereo-depth discrimination task. With stimuli sampled from a stereo-image database of real-world scenes having pixel-by-pixel ground-truth distance data, three human observers completed two closely related double-pass psychophysical experiments. In the two experiments, each human observer responded twice to ten thousand unique trials, in which twenty thousand unique stimuli were presented. New analytical methods reveal, from this data, the specific and nearly dissociable effects of two distinct sources of natural stimulus variability-variation in luminance-contrast patterns and variation in local-depth structure-on discrimination performance, as well as the relative importance of stimulus-driven-variability and internal-noise in determining performance limits. Between-observer analyses show that both stimulus-driven sources of uncertainty are responsible for a large proportion of total variance, have strikingly similar effects on different people, and-surprisingly-make stimulus-by-stimulus responses more predictable (not less). The consistency across observers raises the intriguing prospect that image-computable models can make reasonably accurate performance predictions in natural viewing. Overall, the findings provide a rich picture of stimulus factors that contribute to human perceptual performance in natural scenes. The approach should have broad application to other animal models and other sensory-perceptual tasks with natural or naturalistic stimuli.
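In a double-pass design like the one above, the response agreement across two presentations of identical stimuli bounds the contribution of internal noise: perfect agreement would mean responses are fully stimulus-driven. A minimal sketch with hypothetical binary responses (not the study's data):

```python
def double_pass_agreement(pass1, pass2):
    """Fraction of repeated trials on which the observer gave the same
    response in both passes; agreement below 1.0 despite identical
    stimuli indicates internal noise."""
    assert len(pass1) == len(pass2)
    same = sum(a == b for a, b in zip(pass1, pass2))
    return same / len(pass1)

# Hypothetical 'nearer/farther' responses to the same 10 stimuli
p1 = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
p2 = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]
print(double_pass_agreement(p1, p2))  # → 0.8
```

Comparing this agreement against overall percent correct is the standard way to partition performance limits into stimulus-driven variability and internal noise, the decomposition the abstract reports.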

  • PLoS computational biology
  • Apr 15, 2025
  • David N White + 1

Brain-guided convolutional neural networks reveal task-specific representations in scene processing

Scene categorization is the dominant proxy for visual understanding, yet humans can perform a large number of visual tasks within any scene. Consequently, we know little about how different tasks change how a scene is processed and represented, and how its features are ultimately used. Here, we developed a novel brain-guided convolutional neural network (CNN) in which each convolutional layer was separately guided by neural responses taken at different time points while observers performed a pre-cued object detection task or a scene affordance task on the same set of images. We then reconstructed each layer's activation maps via deconvolution to spatially assess how different features were used within each task. The brain-guided CNN made use of image features that human observers identified as being crucial to completing each task, starting around 244 ms and persisting to 402 ms. Critically, because the same images were used across the two tasks, the CNN could only succeed if the neural data captured task-relevant differences. Our analyses of the activation maps across layers revealed that the brain's spatiotemporal representation of local image features evolves systematically over time. This underscores how distinct image features emerge at different stages of processing, shaped by the observer's goals and behavioral context.
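As a toy illustration of what comparing a layer to neural responses can involve, one common technique is representational similarity analysis: the layer's representational dissimilarity matrix (RDM) is correlated with the RDM of neural responses at a given time point. This is a sketch of that general technique, not the paper's exact training objective; all names are illustrative.

```python
import numpy as np

def rdm(features):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between feature vectors for each pair of images (rows)."""
    z = features - features.mean(axis=1, keepdims=True)
    z /= np.linalg.norm(z, axis=1, keepdims=True)
    return 1.0 - z @ z.T

def alignment_score(layer_acts, neural_resps):
    """Rank (Spearman-style) correlation between a layer's RDM and the
    RDM of neural responses, over the upper triangle only."""
    iu = np.triu_indices(len(layer_acts), k=1)
    a, b = rdm(layer_acts)[iu], rdm(neural_resps)[iu]
    ra = a.argsort().argsort().astype(float)   # ranks
    rb = b.argsort().argsort().astype(float)
    return float(np.corrcoef(ra, rb)[0, 1])
```

A layer whose representational geometry matches the neural responses at a given latency scores near 1; an unrelated layer scores near 0.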

  • Scientific Reports
  • Apr 15, 2025
  • Bruce C Hansen + 5

Model observer task-based assessment of computed tomography metal artifact reduction using a hip arthroplasty phantom.

The United States Food and Drug Administration (FDA) recently published a model observer-based framework for the objective performance assessment of computed tomography (CT) metal artifact reduction (MAR) algorithms and demonstrated the framework's feasibility in the low-contrast detectability (LCD) task-based assessment of MAR performance in a mathematical phantom. This study investigates the feasibility of the model observer-based framework in LCD task-based assessment of MAR performance using a physical arthroplasty phantom, results of which were then compared with the performance of human observers. A phantom simulating a unilateral hip prosthesis was designed with a rotatable insert containing a metal implant (cobalt-chromium spheres attached to titanium rods) and 16 unique low-contrast spherical lesions. Each lesion was scanned 100 times on a CT scanner (Somatom Force, Siemens Healthineers) with standard full-dose and half-dose protocols (140kVp, 300 and 150 quality reference mAs) in each of four different insert rotations to supply 100 pairs of signal-present (lesion) and signal-absent (background) images needed for model observer analyses. Lesion detectability (d') using channelized Hotelling observers (CHO) was optimized by testing different image transformation techniques and channel selection (Gabor and Laguerre-Gauss [LG]) and calculated for each lesion reconstructed with and without iterative MAR (iMAR, Siemens Healthineers). Linear regression was used to assess the d' in each image set. Spearman's correlation was used to compare d' results to human detectability and confidence scores from a previously published human observer study involving the same phantom. CHO d' measurements using LG channels were less sensitive to artifacts than those using Gabor channels and were therefore selected for the LCD assessment. Image masking and thresholding provided more accurate d' by isolating the signal and minimizing background differences. 
For all lesions, d' values of full-dose iMAR images were significantly greater than those of filtered back projection (FBP) images at full dose (p<0.001) and half dose (p<0.001). Additionally, d' values of half-dose iMAR images were significantly greater than those of FBP images at full dose (p=0.010) and half dose (p<0.001). The d' values were not significantly different between full-dose and half-dose FBP (p=0.620) or between full-dose and half-dose iMAR (p=0.358). Pooling across all lesions, d' measurements were positively correlated with human detection rate (Spearman correlation coefficient=0.723; p<0.001) and confidence scores (Spearman correlation coefficient=0.727; p<0.001). CHO-based LCD assessment of MAR performance can feasibly be performed on a physical phantom, and results obtained with this method correlated well with findings from human observers.
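The channelized Hotelling observer computation described above is compact enough to sketch. The NumPy implementation below is illustrative (grid size, channel count, and the Laguerre-Gauss width `a` are hypothetical choices, not the study's settings): images are projected onto Laguerre-Gauss channel profiles, and d' is the Mahalanobis distance between the signal-present and signal-absent channel-output distributions.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def lg_channels(size=64, n_channels=5, a=15.0):
    """Laguerre-Gauss channel profiles on a size x size grid,
    returned as a (pixels, channels) matrix."""
    y, x = np.mgrid[:size, :size] - size // 2
    r2 = 2.0 * np.pi * (x**2 + y**2) / a**2
    gauss = np.exp(-r2 / 2.0)
    chans = []
    for n in range(n_channels):
        coef = np.zeros(n + 1)
        coef[n] = 1.0                          # select Laguerre polynomial L_n
        chans.append((gauss * lagval(r2, coef)).ravel())
    return np.stack(chans, axis=1)

def cho_dprime(signal_imgs, absent_imgs, U):
    """CHO detectability index from paired signal-present /
    signal-absent image stacks of shape (N, size, size)."""
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U   # channel outputs
    va = absent_imgs.reshape(len(absent_imgs), -1) @ U
    dmu = vs.mean(axis=0) - va.mean(axis=0)
    S = 0.5 * (np.cov(vs, rowvar=False) + np.cov(va, rowvar=False))
    return float(np.sqrt(dmu @ np.linalg.solve(S, dmu)))
```

Lesion-by-lesion d' values computed this way can then be correlated against human detection rates, e.g. with `scipy.stats.spearmanr`, as in the comparison reported above.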

  • Medical physics
  • Apr 12, 2025
  • Grant Fong + 6
