Commentary on Friedman et al. (2024): A General Preference for Complexity?

Abstract

It is typically assumed in the empirical aesthetics literature that generalizable abstract stimulus attributes such as familiarity, fluency, and complexity drive preferences. Their general nature means that they can, at least in principle, apply to any stimulus regardless of its characteristics and sensory modality. However, most studies in this tradition are restricted to group-level trends and particular stimulus properties, and therefore say nothing about amodal or general preferences for particular levels of such abstract attributes independently of their characterization at the individual level. Moreover, the hypothesis of a general, amodal preference for attributes like complexity had scarcely been tested, and lacked empirical support, until our Clemente et al. (2021) study provided evidence against it. In their quest for empirical evidence of a preference for complexity across the auditory and visual modalities, Friedman et al. (2024) made two central claims: first, they found it surprising that aesthetic sensitivity to visual and musical complexity did not correlate in our study; second, they expressed concerns about the comparability of the musical and visual stimuli we used. In this commentary, I show that these claims and the premises on which they rely are debatable, and that the results of Friedman et al. (2024) support our conclusion that stimulus information, rather than abstract attributes like complexity, drives evaluative judgments such as liking.
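
The dispute turns on how aesthetic sensitivity is quantified at the individual level. As a minimal illustration with simulated data (a simplified stand-in, not the analysis pipeline of Clemente et al., 2021, or Friedman et al., 2024): estimate each participant's liking-by-complexity slope separately for visual and musical stimuli, then correlate the slopes across modalities; a general, amodal preference for complexity would predict a positive cross-modal correlation.

```python
import numpy as np
from scipy import stats

def sensitivity_slope(complexity, liking):
    """Individual-level aesthetic sensitivity: slope of liking ratings on complexity."""
    return stats.linregress(complexity, liking).slope

rng = np.random.default_rng(0)
n_participants, n_trials = 50, 40
visual_slopes, musical_slopes = [], []
for _ in range(n_participants):
    c_vis = rng.uniform(0, 1, n_trials)   # hypothetical visual complexity values
    c_mus = rng.uniform(0, 1, n_trials)   # hypothetical musical complexity values
    # Idiosyncratic, modality-specific tastes: an independent slope per modality,
    # plus rating noise (the scenario the commentary argues for).
    liking_vis = rng.normal() * c_vis + rng.normal(0, 0.5, n_trials)
    liking_mus = rng.normal() * c_mus + rng.normal(0, 0.5, n_trials)
    visual_slopes.append(sensitivity_slope(c_vis, liking_vis))
    musical_slopes.append(sensitivity_slope(c_mus, liking_mus))

# An amodal preference for complexity would predict r > 0 here.
r, p = stats.pearsonr(visual_slopes, musical_slopes)
print(f"cross-modal correlation of sensitivity slopes: r = {r:.2f}, p = {p:.3f}")
```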

Similar Papers
  • Research Article
  • Cited by 15
  • 10.1016/j.neuropsychologia.2018.08.014
A common representation of time across visual and auditory modalities
  • Aug 22, 2018
  • Neuropsychologia
  • Louise C Barne + 5 more

  • Research Article
  • Cited by 38
  • 10.1121/1.2405859
Integration efficiency for speech perception within and across sensory modalities by normal-hearing and hearing-impaired individuals
  • Feb 1, 2007
  • The Journal of the Acoustical Society of America
  • Ken W Grant + 2 more

In face-to-face speech communication, the listener extracts and integrates information from the acoustic and optic speech signals. Integration occurs within the auditory modality (i.e., across the acoustic frequency spectrum) and across sensory modalities (i.e., across the acoustic and optic signals). The difficulties experienced by some hearing-impaired listeners in understanding speech could be attributed to losses in the extraction of speech information, the integration of speech cues, or both. The present study evaluated the ability of normal-hearing and hearing-impaired listeners to integrate speech information within and across sensory modalities in order to determine the degree to which integration efficiency may be a factor in the performance of hearing-impaired listeners. Auditory-visual nonsense syllables consisting of eighteen medial consonants surrounded by the vowel [a] were processed into four nonoverlapping acoustic filter bands between 300 and 6000 Hz. A variety of one, two, three, and four filter-band combinations were presented for identification in auditory-only and auditory-visual conditions; a visual-only condition was also included. Integration efficiency was evaluated using a model of optimal integration. Results showed that normal-hearing and hearing-impaired listeners integrated information across the auditory and visual sensory modalities with a high degree of efficiency, independent of differences in auditory capabilities. However, across-frequency integration for auditory-only input was less efficient for hearing-impaired listeners. These individuals exhibited particular difficulty extracting information from the highest frequency band (4762-6000 Hz) when speech information was presented concurrently in the next lower-frequency band (1890-2381 Hz). Results suggest that integration of speech information within the auditory modality, but not across auditory and visual modalities, affects speech understanding in hearing-impaired listeners.
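
The "model of optimal integration" is not spelled out in this abstract; one standard benchmark of the kind (assumed here for illustration, not necessarily the model Grant et al. used) is probability summation over independent channels, with efficiency expressed as observed bimodal accuracy relative to that prediction.

```python
def predicted_av(p_a: float, p_v: float) -> float:
    """Probability-summation prediction for auditory-visual accuracy,
    assuming statistically independent auditory and visual channels."""
    return p_a + p_v - p_a * p_v

def integration_efficiency(p_av_observed: float, p_a: float, p_v: float) -> float:
    """Observed bimodal accuracy relative to the independence prediction.
    Values near 1.0 suggest near-optimal integration; lower values, a loss."""
    return p_av_observed / predicted_av(p_a, p_v)

# Hypothetical listener: 55% auditory-only, 30% visual-only, 62% auditory-visual.
print(integration_efficiency(0.62, 0.55, 0.30))  # ~0.91
```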

  • Research Article
  • 10.1111/psyp.70099
Early and Late ERP Correlates of Consciousness: A Direct Comparison Between Visual and Auditory Modalities.
  • Jul 1, 2025
  • Psychophysiology
  • Kinga Ciupińska + 3 more

The majority of previous research on neural correlates of consciousness (NCC) has used the visual system as a model. However, the extent to which reported findings generalize to other sensory modalities has not been comprehensively investigated. To fill this gap, we directly compared visual and auditory NCCs by testing the same group of participants with two analogous procedures. Participants were presented with near-threshold visual and auditory stimuli followed by a detection task and the Perceptual Awareness Scale (PAS). On the behavioral level, as expected from visual awareness studies, PAS ratings were highly correlated with accuracy in the detection task. Analysis of EEG data revealed that the analogous early ERP components, visual and auditory awareness negativity (VAN and AAN), were related to perceptual awareness, whereas the late positivity (LP) was related to perceptual awareness only in the visual modality. Further, we found that VAN and visual LP exhibited shorter latencies than the respective auditory components, suggesting earlier access of visual stimuli to consciousness compared to auditory ones. Finally, neither estimated perceptual thresholds nor amplitudes and latencies of the awareness-related ERP components were correlated between modalities, suggesting the lack of a close link between visual and auditory perceptual mechanisms. Therefore, the observed differences between visual and auditory modalities indicate that the investigated NCCs are modality-specific, and thus that neither of the proposed measures tracks consciousness independently of content-related processing.

  • Research Article
  • Cited by 74
  • 10.1007/s10071-004-0207-1
Perceptual biases for multimodal cues in chimpanzee (Pan troglodytes) affect recognition.
  • Mar 2, 2004
  • Animal Cognition
  • Lisa A Parr

The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these different modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., visual features of one expression but auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or visual cues, and when these modalities were mixed. However, in these latter trials, depending on the expression category, clear preferences for either the visual or auditory modality emerged. Pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Therefore, depending on the type of expressive display, the auditory and visual modalities were differentially salient in ways that appear consistent with the ethological importance of that display's social function.

  • Research Article
  • Cited by 1
  • 10.3390/brainsci12111530
Action Postponing and Restraint Varies among Sensory Modalities
  • Nov 11, 2022
  • Brain Sciences
  • Koyuki Ikarashi + 4 more

Proactive inhibition is divided into two components: action postponing (AP), which refers to slowing the onset of a response, and action restraint (AR), which refers to preventing the response. To date, several studies have reported alterations in proactive inhibition and its associated neural processing among sensory modalities; however, the evidence remains inconclusive owing to several methodological issues. This study aimed to clarify the differences in AP and AR and their neural processing among visual, auditory, and somatosensory modalities using an appropriate experimental paradigm that can assess AP and AR separately. The postponing time, calculated by subtracting simple reaction time from Go signal reaction time, was shorter in the visual modality than in the other modalities. This was explained by faster neural processing for conflict monitoring induced by anticipating the presence of the No-go signal, supported by the shorter latency of the AP-related N2. Furthermore, the percentage of false alarms, i.e., responses to No-go signals, was lower in the visual modality than in the auditory modality. This was attributed to greater neural resources for conflict monitoring induced by the presence of No-go signals, supported by the larger amplitudes of the AR-related N2. Our findings revealed differences in AP and AR and their neural processing among sensory modalities.
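
As the abstract describes, the postponing index is a simple subtraction and the restraint index a false-alarm rate; a minimal sketch with hypothetical numbers (names and values illustrative, not the study's data):

```python
import numpy as np

# Hypothetical per-trial reaction times (ms) for one participant in one modality.
simple_rt = np.array([212.0, 205.0, 220.0, 198.0, 210.0])  # simple RT task, no No-go signals
go_rt = np.array([265.0, 250.0, 270.0, 255.0, 262.0])      # Go trials in the Go/No-go context
nogo_responses, nogo_trials = 4, 50                        # erroneous responses to No-go signals

# Action postponing: response slowing induced by anticipating possible No-go signals.
action_postponing_ms = go_rt.mean() - simple_rt.mean()

# Action restraint: failure to withhold, expressed as a false-alarm percentage.
false_alarm_pct = 100 * nogo_responses / nogo_trials

print(f"AP = {action_postponing_ms:.1f} ms, false alarms = {false_alarm_pct:.0f}%")
```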

  • Research Article
  • Cited by 2
  • 10.2478/psicolj-2019-0005
Cognitive entrainment to isochronous rhythms is independent of both sensory modality and top-down attention
  • Jul 1, 2019
  • Psicológica Journal
  • Diana Cutanda + 2 more

The anisochrony of a stimulus sequence was manipulated parametrically to investigate whether rhythmic entrainment is stronger in the auditory modality than in the visual modality (Experiment 1), and whether it relies on top-down attention (Experiment 2). In Experiment 1, participants had to respond as quickly as possible to a target presented after a sequence of either visual or auditory stimuli. The anisochrony of this sequence was manipulated parametrically, rather than in an all-or-none fashion; that is, it ranged from smaller to larger deviations from isochrony (0, 10, 20, 50, 100, 150, and 200 ms). We compared rhythmic entrainment patterns for the auditory and visual modalities. Results showed a peak of entrainment for both isochrony and deviations from isochrony up to 50 ms (i.e., participants were equally fast after isochronous sequences and after 10, 20, and 50 ms deviations), suggesting that anisochronous sequences can also produce entrainment. Beyond this entrainment window, reaction times became progressively slower. Surprisingly, no differences were found between the entrainment patterns for auditory and visual rhythms. In Experiment 2, we used a dual-task methodology, adding a working memory n-back task to the procedure of Experiment 1. Results did not show interference from the secondary task in either the auditory or the visual modality, with participants showing the same entrainment pattern as in Experiment 1. These results suggest that rhythmic entrainment constitutes a cognitive process that occurs by default (automatically), regardless of the modality in which the stimuli are presented and independently of top-down attention, to generate behavioural benefits.

  • Research Article
  • Cited by 6
  • 10.1155/2021/4158580
Neurophysiological Verbal Working Memory Patterns in Children: Searching for a Benchmark of Modality Differences in Audio/Video Stimuli Processing
  • Jan 1, 2021
  • Computational Intelligence and Neuroscience
  • Bianca Maria Serena Inguscio + 9 more

Exploration of specific brain areas involved in verbal working memory (VWM) is a powerful but not widely used tool for the study of different sensory modalities, especially in children. In this study, for the first time, we used electroencephalography (EEG) to investigate neurophysiological similarities and differences in response to the same verbal stimuli, presented in the auditory and visual modalities during an n-back task with varying memory load in children. Since VWM plays an important role in learning ability, we wanted to investigate whether children process verbal input from auditory and visual stimuli through the same neural patterns and whether performance varies depending on the sensory modality. Performance in terms of reaction times was better in the visual than in the auditory modality (p = 0.008) and worse as memory load increased, regardless of modality (p < 0.001). EEG activation was proportionally influenced by task level and was evident in the theta band over the prefrontal cortex (p = 0.021), along the midline (p = 0.003), and over the left hemisphere (p = 0.003). Differences between the effects of the two modalities were seen only in the gamma band in the parietal cortices (p = 0.009). The values of a brainwave-based engagement index, innovatively used here to test children in a dual-modality VWM paradigm, varied depending on n-back task level (p = 0.001) and negatively correlated (p = 0.002) with performance, suggesting its computational effectiveness in detecting changes in mental state during memory tasks involving children. Overall, our findings suggest that auditory and visual VWM involve the same cortical areas (frontal, parietal, occipital, and midline) and that the significant differences in cortical activation in the theta band were related more to memory load than to sensory modality, suggesting that VWM function in the child's brain involves a cross-modal processing pattern.
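
The abstract does not define its engagement index; a common formulation in this literature, assumed here purely for illustration, is the band-power ratio beta / (alpha + theta) of Pope et al. (1995).

```python
import numpy as np

def engagement_index(theta_power, alpha_power, beta_power):
    """Classic engagement index beta / (alpha + theta) (Pope et al., 1995).
    Inputs are EEG band powers per epoch; higher values suggest greater engagement.
    An illustrative formulation, not necessarily the one used in this study."""
    return beta_power / (alpha_power + theta_power)

# Hypothetical band powers (arbitrary units) across five task epochs.
theta = np.array([4.0, 4.5, 5.0, 5.2, 5.5])
alpha = np.array([3.0, 2.8, 2.5, 2.4, 2.2])
beta = np.array([1.5, 1.7, 2.0, 2.2, 2.5])
print(engagement_index(theta, alpha, beta))
```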

  • Dissertation
  • 10.53846/goediss-6005
The Influence of Emotional Content on Event-Related Brain Potentials during Spoken Word Processing
  • Feb 21, 2022
  • Annika Graß

  • Research Article
  • Cited by 111
  • 10.1016/j.cub.2008.05.043
Integration of Bimodal Looming Signals through Neuronal Coherence in the Temporal Lobe
  • Jun 26, 2008
  • Current Biology
  • Joost X Maier + 2 more

  • Research Article
  • Cited by 17
  • 10.1016/j.ijporl.2017.06.010
Auditory, visual and auditory-visual memory and sequencing performance in typically developing children
  • Jun 15, 2017
  • International Journal of Pediatric Otorhinolaryngology
  • Roshni Pillai + 1 more

  • Research Article
  • 10.1027/1618-3169/a000487
Can You See What I Hear?
  • Mar 1, 2020
  • Experimental Psychology
  • Anna Conci + 2 more

Previous research on inattentional blindness (IB) has focused almost entirely on the visual modality. This study extends the paradigm by pairing visual with auditory stimuli. New visual and auditory stimuli were created to investigate the phenomenon of inattention in the visual, auditory, and paired modalities. The goal of the study was to assess to what extent the pairing of visual and auditory modalities fosters the detection of change. Participants watched a video sequence and counted predetermined words in a spoken text. IB and inattentional deafness occurred in about 40% of participants when attention was engaged by this difficult (auditory) counting task. Most importantly, participants detected the changes considerably more often (88%) when the change occurred in both modalities rather than just one. One possible reason for the drastic reduction of IB or deafness in a multimodal context is that the discrepancy between the expected and the encountered course of events increases proportionally across sensory modalities.

  • Research Article
  • Cited by 7
  • 10.3389/fpsyg.2016.00535
Odors Bias Time Perception in Visual and Auditory Modalities
  • Apr 22, 2016
  • Frontiers in Psychology
  • Zhenzhu Yue + 3 more

Previous studies have shown that emotional states alter our perception of time. However, attention, which is modulated by a number of factors, such as emotional events, also influences time perception. To exclude potential attentional effects associated with emotional events, various types of odors (inducing different levels of emotional arousal) were used to explore whether olfactory events modulated time perception differently in the visual and auditory modalities. Participants either saw a visual dot or heard a continuous tone for 1000 or 4000 ms while they were exposed to odors of jasmine, lavender, or garlic. Participants then reproduced the temporal durations of the preceding visual or auditory stimuli by pressing the spacebar twice. Their reproduced durations were compared to those in a control condition (without odor). The results showed that participants produced significantly longer time intervals in the lavender condition than in the jasmine or garlic conditions. The overall influence of odor on time perception was equivalent for the visual and auditory modalities. The analysis of the interaction effect showed that participants produced longer durations than the actual duration in the short-interval condition but shorter durations in the long-interval condition. The effect sizes were larger for the auditory modality than for the visual modality. Moreover, by comparing performance across the initial and final blocks of the experiment, we found that odor adaptation effects were mainly manifested as longer reproductions of the short time interval later in the adaptation phase, with a larger effect size in the auditory modality. In summary, the present results indicate that odors imposed differential impacts on reproduced time durations, constrained by sensory modality, the valence of the emotional events, and target duration. Biases in time perception could be accounted for by a framework of attentional deployment between the inducers (odors) and emotionally neutral stimuli (visual dots and sound beeps).

  • Research Article
  • Cited by 4
  • 10.1080/10888438.2023.2166413
A Cross-Modal Investigation of Statistical Learning in Developmental Dyslexia
  • Mar 4, 2023
  • Scientific Studies of Reading
  • Nitzan Kligler + 1 more

Structural patterns existing in language can be exploited for implicit prediction of sequences in speech and visual input via a process termed statistical learning (SL). Despite extensive examination of SL in dyslexia, whether SL problems arise from modality-constrained learning processes or from global learning processes is still unknown, nor is it clear how SL can be supported. Purpose: The present study used the triplet paradigm to explore SL among young adults with dyslexia and among typical readers across auditory and visual modalities, and tested whether information from one sensory modality can assist SL in a different sensory modality. Method: Participants performed auditory and visual SL tasks under conditions in which a consistent visual/auditory cue accompanied the auditory/visual triplets, respectively, or under conditions in which no cross-modal information was presented. Results: SL performance was poorer in the dyslexia group than among typical readers across visual and auditory modalities. Furthermore, both groups improved their SL under conditions in which cues were consistent with triplet boundaries compared to conditions lacking cross-modal information. Conclusions: These findings suggest that the SL impairments observed in dyslexia stem from a domain-general deficiency and that cross-modal information can be recruited to support SL in dyslexia.
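
The triplet paradigm mentioned here rests on transitional probabilities that are high within triplets and low at triplet boundaries. A minimal sketch of how such probabilities are computed, using a hypothetical familiarization stream (not the study's stimuli):

```python
from collections import Counter

def transitional_probabilities(stream):
    """TP(b | a) = count(a followed by b) / count(a), for every adjacent pair."""
    pair_counts = Counter(zip(stream, stream[1:]))
    first_counts = Counter(stream[:-1])
    return {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}

# Hypothetical familiarization stream built from two triplets, ABC and DEF.
stream = list("ABCDEFABCABCDEFDEFABC")
tps = transitional_probabilities(stream)
print(tps[("A", "B")])  # within-triplet transition: 1.0
print(tps[("C", "D")])  # triplet boundary: lower (here about 0.67)
```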

  • Research Article
  • Cited by 129
  • 10.1037/a0020731
The role of sensory modality in age-related distraction: A critical review and a renewed view.
  • Nov 1, 2010
  • Psychological Bulletin
  • Maria J S Guerreiro + 2 more

Selective attention requires the ability to focus on relevant information and to ignore irrelevant information. The ability to inhibit irrelevant information has been proposed to be the main source of age-related cognitive change (e.g., Hasher & Zacks, 1988). Although age-related distraction by irrelevant information has been extensively demonstrated in the visual modality, studies involving auditory and cross-modal paradigms have revealed a mixed pattern of results. A comparative evaluation of these paradigms according to sensory modality suggests a twofold trend: Age-related distraction is more likely (a) in unimodal than in cross-modal paradigms and (b) when irrelevant information is presented in the visual modality, rather than in the auditory modality. This distinct pattern of age-related changes in selective attention may be linked to the reliance of the visual and auditory modalities on different filtering mechanisms. Distractors presented through the auditory modality can be filtered at both central and peripheral neurocognitive levels. In contrast, distractors presented through the visual modality are primarily suppressed at more central levels of processing, which may be more vulnerable to aging. We propose the hypothesis that age-related distractibility is modality dependent, a notion that might need to be incorporated in current theories of cognitive aging. Ultimately, this might lead to a more accurate account for the mixed pattern of impaired and preserved selective attention found in advancing age.

  • Research Article
  • Cited by 1
  • 10.3389/fnhum.2021.725449
Weighted Integration of Duration Information Across Visual and Auditory Modality Is Influenced by Modality-Specific Attention.
  • Oct 7, 2021
  • Frontiers in human neuroscience
  • Hiroshi Yoshimatsu + 1 more

We constantly integrate multiple types of information from different sensory modalities. Generally, such integration is influenced by the modality that we attend to. However, for duration perception, it has been shown that when duration information from the visual and auditory modalities is integrated, the perceived duration of the visual stimulus leans toward the duration of the auditory stimulus, irrespective of which modality is attended. In those studies, auditory dominance was assessed using visual and auditory stimuli with different durations, whose onset and offset timings would affect perception. In the present study, we aimed to investigate the effect of attention on duration integration using visual and auditory stimuli of the same duration. Since the duration of a visual flicker tends to be perceived as longer than its physical duration, and that of an auditory flutter as shorter, we used a 10 Hz visual flicker and auditory flutter with the same onset and offset timings but different perceived durations. Participants were asked to attend to the visual modality, the auditory modality, or both. Contrary to the attention-independent auditory dominance reported in previous studies, we found that the perceived duration of the simultaneous flicker and flutter presentation depended on which modality the participants attended. To further investigate the process of duration integration across the two modalities, we applied Bayesian hierarchical modeling, which enabled us to define a flexible model in which the multisensory duration is represented by the weighted average of the estimates from each sensory modality. In addition, to examine whether auditory dominance results from the higher reliability of auditory stimuli, we applied further models that take stimulus reliability into account. These behavioral and modeling results suggest the following: (1) the perceived duration of visual and auditory stimuli is influenced by which modality participants attend to when the confounding effect of stimulus onset-offset timing is controlled, and (2) an attention-driven increase in weight affects duration integration even when the effect of stimulus reliability is controlled. Our models can be extended to investigate the neural basis of duration integration and the effects of other sensory modalities.
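
The weighted-average scheme described here corresponds to standard reliability-weighted cue combination, in which each modality's weight is proportional to the inverse of its variance; modality-specific attention can then be cast as shifting these weights. A minimal sketch under those textbook assumptions (not the authors' hierarchical model):

```python
def integrate_duration(d_vis, var_vis, d_aud, var_aud):
    """Reliability-weighted average of visual and auditory duration estimates.
    Weights are inverse variances, as in standard maximum-likelihood cue combination."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_aud)
    w_aud = 1 - w_vis
    return w_vis * d_vis + w_aud * d_aud

# Hypothetical case: flicker perceived as 1100 ms (noisy, sd 200 ms),
# flutter perceived as 900 ms (reliable, sd 100 ms).
print(integrate_duration(d_vis=1100, var_vis=200**2, d_aud=900, var_aud=100**2))
# -> 940.0 ms, pulled toward the more reliable auditory estimate
```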
