Abstract

FLICKER-FUSION FREQUENCY FOR ACOUSTIC SIGNALS

Ruben Tikidji-Hamburyan 1* and Witali Dunin-Barkowski 2
1 KRINC
2 NIISI RAS

It is generally considered that sensory inflow to the nervous system is cut into fragments, which are then bound to each other, based on their features, to produce sensory perception [1]. This idea is supported by the flicker-fusion of visual images, in which the perception of continuous movement can be produced by a series of static visual frames. The critical frequency at which such a series fuses into temporally continuous visual perception is well known (8-12 Hz for oscilloscope sweeps, and higher for more complex visual stimuli). It is natural to suggest that the same fusion phenomenon should be present in other sensory modalities. Here we test this idea for audio signals. Of course, an acoustic signal cannot simply be stopped in time, since it is dynamic in principle. Nevertheless, we report an initial attempt to obtain a static (in a definite sense) acoustic signal, together with an experimental identification of the acoustic flicker-fusion frequency.

We cut the electrically recorded acoustic signal into fragments of equal duration and simulated a full stop in time by time-reversing the signal within each fragment. The transformed signal was presented to human subjects for subjective analysis, and we determined the lowest fragmentation frequency at which the content of the recorded speech could be understood. The acoustic material was virtually noiseless Russian speech, analyzed by native Russian-speaking subjects. Up to 7.5 Hz, no message can be extracted from the transformed signal. At 10 Hz, about half of the speech information can be recovered. At 12.5 Hz and above, practically all words can be understood.

In a second type of experiment, we again cut the audio signal into fragments but did not reverse them. Instead, we inserted between the fragments of the original signal either periods of silence (of the same duration as the fragments, or double it) or fragments of noise with the same spectrum as the speech signal. At 12.5 Hz, with silent intervals of up to double the cutting period, subjects could understand the speech; it was perceived as stuttering speech. When noise was inserted between the fragments, the signal could not be recognized. However, filling every interval between the original fragments with the same particular waveform (also generated as a noise sample) yielded an understandable signal.

These findings support the idea that the brain transforms audio information into discrete fragments during processing. The flicker-fusion frequency revealed in our studies points to the alpha rhythm as a possible rhythm involved in the fragmentation of audio signals in the brain.
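The signal manipulations described above are simple enough to sketch in code. The following is a minimal NumPy reconstruction, not the authors' actual implementation: the 16 kHz sample rate, the function names, and the use of phase randomization to generate spectrum-matched noise are all illustrative assumptions.

# Sketch of the two fragment manipulations (illustrative reconstruction;
# the sample rate and all helper names are assumptions, not the authors' code).
import numpy as np

def reverse_fragments(signal, fs, frag_hz):
    # Experiment 1: cut the signal into fragments of duration 1/frag_hz
    # and time-reverse the samples within each fragment.
    n = int(round(fs / frag_hz))              # samples per fragment
    out = signal.copy()
    for start in range(0, len(signal) - n + 1, n):
        out[start:start + n] = signal[start:start + n][::-1]
    return out

def spectrum_matched_noise(signal, rng):
    # Noise with the same amplitude spectrum as the speech: keep the
    # FFT magnitudes, randomize the phases (one plausible construction).
    spec = np.fft.rfft(signal)
    phases = rng.uniform(0.0, 2.0 * np.pi, len(spec))
    return np.fft.irfft(np.abs(spec) * np.exp(1j * phases), len(signal))

def insert_between_fragments(signal, fs, frag_hz, make_filler):
    # Experiment 2: keep the fragments in their original order and insert
    # filler material between them; make_filler(n) returns the next gap.
    n = int(round(fs / frag_hz))
    pieces = []
    for start in range(0, len(signal), n):
        pieces.append(signal[start:start + n])
        pieces.append(make_filler(n))
    return np.concatenate(pieces)

# Example at the 12.5 Hz fragmentation rate reported in the abstract.
fs, frag_hz = 16000, 12.5
rng = np.random.default_rng(0)
speech = rng.standard_normal(2 * fs)          # stand-in for a speech recording
reversed_speech = reverse_fragments(speech, fs, frag_hz)

# Silent gaps of double the fragment duration: perceived as stuttering speech.
with_silence = insert_between_fragments(speech, fs, frag_hz,
                                        lambda n: np.zeros(2 * n))
# Fresh spectrum-matched noise in every gap: reported as unrecognizable.
with_fresh_noise = insert_between_fragments(
    speech, fs, frag_hz,
    lambda n: spectrum_matched_noise(speech, rng)[:n])
# The same noise waveform repeated in every gap: reported as intelligible.
fixed = spectrum_matched_noise(speech, rng)
with_fixed_noise = insert_between_fragments(speech, fs, frag_hz,
                                            lambda n: fixed[:n])

The three make_filler variants correspond to the three conditions of the second experiment: silence, independent noise per gap, and one fixed noise waveform reused in every gap.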
The work was supported by Russian Foundation for Basic Research grant no. 10-07-00206 to W.L.D.B.

Reference

[1] Buzsáki G. Rhythms of the Brain. Oxford University Press, 2006, 448 p.

Keywords: General neuroinformatics

Conference: 4th INCF Congress of Neuroinformatics, Boston, United States, 4 Sep - 6 Sep, 2011.

Presentation Type: Poster Presentation

Topic: General neuroinformatics

Citation: Tikidji-Hamburyan R and Dunin-Barkowski W (2011). FLICKER-FUSION FREQUENCY FOR ACOUSTIC SIGNALS. Front. Neuroinform. Conference Abstract: 4th INCF Congress of Neuroinformatics. doi: 10.3389/conf.fninf.2011.08.00042

Received: 17 Oct 2011; Published Online: 19 Oct 2011.

* Correspondence: Dr. Ruben Tikidji-Hamburyan, KRINC, Rostov-on-Don, rth@nisms.krinc.ru
