Sound frequency predicts the bodily location of auditory-induced tactile sensations in synesthetic and ordinary perception
For individuals with sound-touch synesthesia, sounds consistently evoke strong, localized sensations on the body. We systematically investigated the relationship between sound frequency and the characteristics of induced tactile experiences in synesthetes (n = 19) and controls (n = 23). Sound frequency strongly predicted the location of tactile sensations in both groups. Synesthetes experienced touch more frequently and tended to report sensations in more spatially focused regions of the body, reflecting a sharper mapping between sound frequency and somatotopy. This spatial distribution of touch according to sound frequency reflects a behavioral mapping between tonotopy and somatotopy, suggesting the involvement of early, tonotopically and somatotopically organized brain areas. These findings highlight a strong similarity between auditory-tactile mappings in synesthetic and ordinary perception, suggesting that synesthesia differs only in the strength of the mappings and may therefore lie on a spectrum with ordinary perception. Furthermore, these findings offer insights into the neural mechanisms of sound-touch mappings, suggesting they rely on cross-modal neural pathways used in ordinary perception.
- Research Article
46
- 10.1016/j.cub.2020.02.048
- Mar 19, 2020
- Current Biology
Multisensory Integration Develops Prior to Crossmodal Recalibration.
- Dissertation
- 10.5451/unibas-006658217
- Jan 1, 2014
Hox genes and tonotopic organization of auditory brainstem circuits
- Abstract
- 10.1186/1471-2202-13-s1-p82
- Jul 1, 2012
- BMC Neuroscience
Most echolocating bat species use the sound pressure level (SPL) and Doppler-shifted frequency of ultrasonic echo pulses to measure the size and velocity of a target. The neural circuits for detecting these target features are specialized for amplitude and frequency analysis of the second-harmonic constant-frequency (CF2) component of Doppler-shifted echoes, and are well known [1]. In natural situations, large objects in the environment, such as bushes or trees, produce complex stochastic echoes that can be characterized by their roughness, and the echo reflected from a target insect is embedded in this complex signal. Even in such an environment, bats can accurately detect detailed information about a flying insect. We consider here two questions: how neural circuits bind the amplitude and frequency information of echo sounds, and how bats distinguish target information from background signals. To address these issues, we developed a neural network model for detecting the SPL amplitude and Doppler-shifted frequency of echo sounds. The model contains two hemispheres, each of which consists of cochlear (Ch), inferior colliculus (IC), and Doppler-shifted constant frequency (DSCF) processing networks. The Ch network has a frequency map by which sound frequency is encoded; the model for detecting the frequency information of echo sounds was based on a model we presented previously [2]. The SPL amplitude is encoded in the firing rate of IC neurons. The IC neurons encode SPL amplitude by means of a balance between excitatory connections from contralateral Ch neurons and inhibitory connections from ipsilateral ones, and then combine the amplitude and frequency information of the echo sound. The DSCF network has two types of sub-networks detecting the AC and DC components of the echo sound, which represent the information of the target and background signals, respectively.
We showed that in the IC, the amplitude information of the echo sound is encoded by integrating the outputs of ipsi- and contralateral Ch neurons and then combined with the Doppler-shifted frequency information encoded by the tonotopic map of Ch neurons. The accuracy of the amplitude and frequency information was improved in the DSCF area. The model reproduced well several experimental results observed in IC and DSCF neurons. We also showed that the AC and DC components of the echo signal are discriminated in the two DSCF sub-networks; this discrimination ability is due to the difference in time constants between DSCF neurons in the two sub-networks.
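The excitatory/inhibitory balance described above can be sketched as a toy rate model of a single IC unit: excitation from the contralateral cochlear (Ch) output, inhibition from the ipsilateral one, with the net drive rectified and saturated. The function name, weights, and rate cap are illustrative assumptions, not values from the paper:

```python
import numpy as np

def ic_rate(contra_input, ipsi_input, w_exc=1.0, w_inh=0.6, max_rate=100.0):
    """Toy rate model of an IC unit: excitation from the contralateral
    Ch output minus inhibition from the ipsilateral one, rectified at
    zero and saturating at max_rate. All parameters are illustrative."""
    drive = w_exc * contra_input - w_inh * ipsi_input
    return float(np.clip(drive, 0.0, max_rate))
```

In this sketch the firing rate grows with contralateral (echo-side) amplitude and is suppressed by ipsilateral input, which is the qualitative balance the abstract attributes to SPL coding in IC.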
- Research Article
24
- 10.1523/eneuro.0078-18.2018
- Mar 1, 2018
- eNeuro
Natural sound is composed of various frequencies. Although the core region of the primate auditory cortex has functionally defined sound frequency preference maps, how the map is organized in the auditory areas of the belt and parabelt regions is not well known. In this study, we investigated the functional organizations of the core, belt, and parabelt regions encompassed by the lateral sulcus and the superior temporal sulcus in the common marmoset (Callithrix jacchus). Using optical intrinsic signal imaging, we obtained evoked responses to band-pass noise stimuli in a range of sound frequencies (0.5–16 kHz) in anesthetized adult animals and visualized the preferred sound frequency map on the cortical surface. We characterized the functionally defined organization using histologically defined brain areas in the same animals. We found tonotopic representation of a set of sound frequencies (low to high) within the primary (A1), rostral (R), and rostrotemporal (RT) areas of the core region. In the belt region, the tonotopic representation existed only in the mediolateral (ML) area. This representation was symmetric with that found in A1 along the border between areas A1 and ML. The functional structure was not very clear in the anterolateral (AL) area. Low frequencies were mainly preferred in the rostrotemporal lateral (RTL) area, while high frequencies were preferred in the caudolateral (CL) area. There was a portion of the parabelt region that strongly responded to higher sound frequencies (>5.8 kHz) along the border between the rostral parabelt (RPB) and caudal parabelt (CPB) regions.
- Research Article
- 10.30574/ijsra.2024.13.2.2130
- Nov 30, 2024
- International Journal of Science and Research Archive
Pure consciousness, often described as a state of heightened awareness or transcendence beyond ordinary perception, has long intrigued philosophers, neuroscientists, and psychologists alike. Recent advances in neuroendocrinology suggest that oxytocin, a hormone traditionally associated with social bonding, empathy, and trust, may play a significant role in modulating states of pure consciousness. This review explores the intersection between oxytocin and the neurobiology of awareness, investigating how the hormone influences various neural pathways and brain structures involved in consciousness. By examining current research on oxytocin’s effects on the prefrontal cortex, amygdala, and hippocampus—regions crucial for emotional regulation, memory, and self-awareness—we propose a model where oxytocin acts as a biochemical facilitator of deeper, more connected states of consciousness. Furthermore, we explore how oxytocin's role in social bonding may extend beyond interpersonal connections, fostering a sense of unity and interconnectedness often reported in higher states of consciousness. By bridging the neurochemical, psychological, and philosophical dimensions of consciousness, this review aims to provide a comprehensive understanding of oxytocin’s potential to shape human awareness, offering new insights into both the scientific and experiential dimensions of pure consciousness.
- Research Article
6
- 10.1007/s005300050044
- Mar 1, 1997
- Multimedia Systems
Multisensory scientific data sensualization methods that utilize virtual reality technology permit the use of several human sensations, such as visual, acoustic, and tactile sensation, to display numerical data. The purposes of multisensory data sensualization can be classified as follows: (a) representing the relationships between different kinds of data; (b) displaying data utilizing sensory integration; and (c) representing conditions using a compound image. By using multisensory information, computers increase their ability to express data. However, these methods raise the question of which sensation should be used to display data most effectively. In this study, a multisensory data sensualization environment was developed in which color, loudness, sound frequency, and air-flow pressure could be used to display scientific data. In particular, a wind-sensation display prototype using air-flow pressure was developed to generate tactile sensation. A basic experiment was conducted on sensory interference when subjects used two kinds of sensation simultaneously. From these results, guidelines for the usage of multisensory information for each purpose are proposed.
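An environment like the one described needs a rule for mapping each scalar datum onto its display channels. A minimal sketch of such a mapping over the four channels named above (color, loudness, sound frequency, air-flow pressure); the function name and all channel ranges are placeholder assumptions, not values from the study:

```python
def map_to_channels(value, vmin, vmax):
    """Linearly map a scalar datum onto illustrative display channels:
    hue (deg), loudness (dB), sound frequency (Hz, log sweep), and
    air-flow pressure (kPa). Ranges are placeholders, not study values."""
    if vmax == vmin:
        raise ValueError("degenerate data range")
    x = (value - vmin) / (vmax - vmin)   # normalize to [0, 1]
    x = min(max(x, 0.0), 1.0)            # clamp out-of-range data
    return {
        "hue_deg": 240.0 * (1.0 - x),                    # blue (low) -> red (high)
        "loudness_db": 40.0 + 30.0 * x,                  # 40-70 dB
        "frequency_hz": 200.0 * (2000.0 / 200.0) ** x,   # 200 Hz - 2 kHz, log-spaced
        "airflow_kpa": 1.0 * x,                          # 0-1 kPa wind display
    }
```

The log-spaced frequency channel reflects the common design choice that pitch is perceived roughly logarithmically, whereas the other channels are mapped linearly.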
- Research Article
78
- 10.1016/0091-3057(94)90231-3
- Dec 1, 1994
- Pharmacology Biochemistry and Behavior
High-frequency ultrasonic vocalization induced by intracerebral glutamate in rats
- Research Article
- 10.1017/s1355771825100988
- Jan 28, 2026
- Organised Sound
This article proposes the electromagnetic soundwalk as an anti-method for consumer research, a compositional practice that listens to the infrastructural residue of market environments without aiming to interpret, represent or explain. Using a handheld electromagnetic detector, the walk transposes imperceptible emissions into audible frequencies, revealing the operational murmur of retail systems. These include devices such as wireless payment systems, contactless terminals, touch-screen tablets and digital signage, technologies that organise and condition consumer experience, but do so silently, beneath the threshold of ordinary perception. These electromagnetic emissions trace the infrastructures that shape and facilitate consumption yet remain formally outside marketing discourse. The soundwalk stages a form of methodological estrangement, where listening becomes a way of staying with systems that persist without expressive form. While rooted in soundwalking traditions, the project diverges from immersion or participation. Positioned within the sonic turn in consumer research, the paper reframes sound as residue, an ambient trace of logistical systems. For marketing, this is a speculative proposition. For sound studies, it is an example of compositional listening used to breach an adjacent field. What results is not a soundwalk for its own sake, but an acoustic method for hearing how consumer systems continue, quietly and without reward. The first section of the paper adopts a speculative and affective tone, free of citation, to evoke the experiential register of the method. Subsequent sections develop the theoretical and methodological foundations in a more conventional academic voice.
- Discussion
39
- 10.1016/j.biopsych.2011.11.009
- Dec 10, 2011
- Biological Psychiatry
In Search of Psychosis Biomarkers in High-risk Populations: Is the Mismatch Negativity the One We've Been Waiting for?
- Conference Article
- 10.1109/icit.2016.7475049
- Mar 1, 2016
Users can easily experience a highly immersive feeling in a virtual environment thanks to the widespread availability of human-machine interface devices such as head-mounted displays and inexpensive human motion sensors. The introduction of such electronic devices has led to more research on human interaction with robots and machine systems in virtual and augmented reality environments. Human-machine interface (HMI) researchers are looking for new ways to handle various kinds of information, and a variety of studies that present tactile experiences to users by employing tactile actuators and devices have been introduced. Higher-level tactile percepts such as phantom sensations and apparent movement are known as tactile illusions, in which a user feels a virtual sensation when stimulated at different points on the skin at the same time; these can also be used effectively to present better tactile sensations. In this study, we focus on the tactile perception of apparent movement on the human body and develop a system by which a user has a special tactile experience in an environment with a mobile robot, receiving feedback about the robot's situation, such as its moving speed and the roughness of the ground and walls.
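Tactile apparent movement of the kind used here is typically produced by firing two actuators with a duration-dependent stimulus-onset asynchrony (SOA). A minimal sketch, assuming the commonly cited empirical linear rule SOA ≈ 0.32 × duration + 47.3 ms; the constants are indicative only and the paper's own timing parameters are not given in the abstract:

```python
def apparent_motion_soa(duration_ms):
    """Stimulus-onset asynchrony (ms) for smooth tactile apparent motion,
    using the empirical linear fit SOA = 0.32 * duration + 47.3 ms.
    Treat the constants as indicative, not as this paper's values."""
    return 0.32 * duration_ms + 47.3

def actuator_onsets(n_actuators, duration_ms):
    """Onset times (ms) for a row of actuators fired in sequence so the
    stimulus appears to sweep continuously across the skin."""
    soa = apparent_motion_soa(duration_ms)
    return [i * soa for i in range(n_actuators)]
```

For example, with 100-ms bursts the rule gives an SOA of 79.3 ms, so each actuator in the row starts before its predecessor finishes, which is what produces the illusion of continuous motion.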
- Research Article
3
- 10.1002/npr2.12090
- Dec 1, 2019
- Neuropsychopharmacology Reports
Aims: The brain function that detects deviations in the acoustic environment can be evaluated with mismatch negativity (MMN). MMN to sound-duration deviance has recently drawn attention as a biomarker for schizophrenia. Nonhuman animals, including rats, also exhibit MMN-like potentials, so MMN research in nonhuman animals can help clarify the neural mechanisms underlying MMN production. However, results from preclinical MMN studies on duration deviance have been conflicting. We investigated the effect of sound frequency on MMN-like potentials to duration deviance in rats. Methods: Event-related potentials were recorded from an electrode placed on the primary auditory cortex of freely moving rats using an oddball paradigm consisting of 50-ms tones (standards) and 150-ms tones (deviants) at a 500-ms stimulus onset asynchrony. The sound frequency was set to one of three conditions: 3, 12, and 50 kHz. Results: MMN-like potentials that depended on the short-term stimulus history of background regularity were observed only in the 12-kHz condition. Conclusions: MMN-like potentials to duration deviance depend on the tone frequency of the oddball paradigm in rats, suggesting that rats have a distinct sound-duration recognition ability.
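The oddball paradigm in the Methods can be sketched as a trial-list generator that mixes 50-ms standards with occasional 150-ms deviants. The ~10% deviant probability and the no-consecutive-deviants constraint are common oddball conventions assumed here, not specifics stated in the abstract:

```python
import random

def oddball_sequence(n_trials=400, deviant_p=0.1, standard=50, deviant=150, seed=0):
    """Build a duration-oddball trial list (tone durations in ms):
    mostly standards, with occasional deviants and never two deviants
    in a row. Probability and constraint are typical, assumed values."""
    rng = random.Random(seed)
    seq = []
    for _ in range(n_trials):
        if seq and seq[-1] == deviant:
            seq.append(standard)  # enforce at least one standard between deviants
        else:
            seq.append(deviant if rng.random() < deviant_p else standard)
    return seq
```

The MMN-like potential would then be computed as the deviant-minus-standard difference wave over the responses this sequence evokes.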
- Research Article
14
- 10.1016/j.infbeh.2005.05.006
- May 31, 2005
- Infant Behavior and Development
Haptic perception and the psychosocial functioning of preterm, low birth weight infants
- Research Article
7
- 10.1049/ccs2.12008
- Apr 16, 2021
- Cognitive Computation and Systems
Aiming at research on assistive technology for the blind, a generative adversarial network model is proposed to transform the visual modality into touch. First, two key representations of the visual-to-tactile conversion are identified: the texture image of an object and the audio frequency that generates vibrotactile feedback. The task is therefore essentially one of generating audio from images. The authors propose a cross-modal network framework that generates corresponding vibrotactile signals from texture images. More importantly, the network is end-to-end: it eliminates the traditional intermediate step of converting a texture image to a spectrum image and can directly transform visual input into tactile output. A quantitative evaluation system is proposed to assess the performance of the network model. The experimental results show that the network can convert visual information into tactile signals. The proposed method is shown to be superior to the existing method of indirectly generating vibrotactile signals, and the applicability of the model is verified.
- Research Article
1
- 10.3390/app122413004
- Dec 18, 2022
- Applied Sciences
The purpose of this study was to observe the effects of audible and inaudible binaural-beat stimuli on alpha-power elicitation and to compare the triggering effects depending on sound perception. Experiments were conducted on healthy male and female subjects (11 males and 10 females; mean age 24.6 ± 1.8 years). To induce alpha waves, an audible (250 Hz) or inaudible (18,000 Hz) baseline sound frequency was presented to the left ear, and a frequency 10 Hz higher than the baseline was presented to the right ear. There were two experimental phases: a rest phase (5 min) in which no stimulus was presented and a stimulation phase (5 min) in which the binaural-beat stimulus was presented. An electroencephalogram was measured at a sampling rate of 500 Hz, and relative alpha-power values were calculated for each phase in each brain area. In the central regions, both baseline frequencies (audible and inaudible) increased the relative alpha power during the stimulation phase compared with the rest phase, with no difference between the two baseline frequencies. In the frontal and central regions, there was a greater increase in relative alpha power in the audible case than in the inaudible case.
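The dichotic stimulus described above (a baseline tone to the left ear and a tone 10 Hz higher to the right) can be sketched as follows; the function name and the audio sampling rate are our assumptions, not the study's hardware settings:

```python
import numpy as np

def binaural_beat(base_hz, beat_hz, duration_s=1.0, fs=44100):
    """Generate a stereo tone pair whose frequencies differ by beat_hz.
    The beat is not present acoustically in either channel; it arises
    centrally when the two ears' inputs are combined."""
    t = np.arange(int(duration_s * fs)) / fs
    left = np.sin(2 * np.pi * base_hz * t)               # e.g. 250 Hz to the left ear
    right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)  # 260 Hz to the right ear
    return left, right

left, right = binaural_beat(250, 10)  # 10 Hz beat, targeting the alpha band
```

Note that each channel is a pure tone; unlike a monaural beat, no 10-Hz amplitude modulation exists in either waveform alone.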
- Research Article
52
- 10.14814/phy2.12465
- Jul 1, 2015
- Physiological Reports
A recent study showed that tactile sensation at the fingertip pads can improve when imperceptible white-noise vibration is applied to the skin at the wrist or dorsum of the hand in stroke patients. This study further examined this behavior by investigating the effect of both imperceptible and perceptible white-noise vibration, applied at different locations on the distal upper extremity, on fingertip tactile sensation in healthy adults. In 12 healthy adults, white-noise vibration was applied to one of four locations (dorsum of the hand by the second knuckle, thenar area, hypothenar area, and volar wrist) at one of four intensities (zero, 60%, 80%, and 120% of the sensory threshold for each vibration location), while the fingertip sensation, i.e., the smallest vibratory signal that could be perceived on the thumb and index fingertip pads, was assessed. Vibration intensity significantly affected the fingertip sensation (P < 0.01) in a similar manner for all four vibration locations. Specifically, compared with the zero-vibration condition, vibration at 60% of the sensory threshold improved thumb and index fingertip tactile sensation (P < 0.01), vibration at 120% of the sensory threshold degraded it (P < 0.01), and vibration at 80% did not significantly change it (P > 0.01). This effect of vibration intensity conforms to stochastic resonance behavior. The nonspecificity to vibration location suggests that the white-noise vibration affects higher-level neuronal processing for fingertip sensing. Further studies are needed to elucidate the neural pathways by which vibration of the distal upper extremity affects fingertip tactile sensation.
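The stochastic-resonance account can be illustrated with a toy threshold detector: a subthreshold signal is never detected without noise, becomes detectable at moderate noise, and loses discriminability once noise dominates (noise alone crosses the threshold almost as often as signal plus noise). All parameters are illustrative assumptions, not values from the study:

```python
import numpy as np

def detect_rate(mean, noise_sd, threshold=1.0, n=100_000, seed=0):
    """Fraction of noisy samples of a constant input that exceed a
    hard detection threshold."""
    rng = np.random.default_rng(seed)
    return float(np.mean(mean + rng.normal(0.0, noise_sd, n) > threshold))

def discriminability(noise_sd, signal=0.8):
    """Hit rate minus false-alarm rate for a subthreshold signal
    (signal < threshold), as a crude detectability index."""
    return detect_rate(signal, noise_sd) - detect_rate(0.0, noise_sd)
```

In this sketch `discriminability(0.0)` is exactly 0 (the 0.8 signal never crosses the 1.0 threshold), it rises at moderate noise, and it falls again as noise swamps the signal, mirroring the inverted-U relation between vibration intensity and fingertip sensation reported above.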