Abstract

There are times when a person must focus on one conversation while multiple others are happening around them. This ability is referred to as the cocktail party phenomenon, and individuals with impaired hearing lack it. This paper gives insight into how the brain handles these situations and filters out what a person is not focusing on. Three video monitors were placed in front of the subject, each playing both video and audio; the objective was to find any changes in classification accuracy when video stimuli are provided in addition to audio. EEG was collected from the subject with a g.Nautilus headset while the multiple audio and video sources played, yielding one dataset per source. These datasets were used to train a single machine learning classifier that distinguishes which sound source the subject is attending to. The results yield an average accuracy of 94.28%.

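As a rough sketch of the classification step described above, the snippet below trains one classifier to predict which of the three sources an epoch of EEG corresponds to. The abstract does not specify the classifier, features, or recording parameters, so everything here is an assumption for illustration: an RBF-kernel SVM from scikit-learn, flattened one-second epochs at 250 Hz over 8 channels (the g.Nautilus is offered in several channel counts), 5-fold cross-validation, and synthetic data standing in for the real recordings.

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)

    # Assumed dimensions (not from the paper): 300 one-second epochs,
    # 8 EEG channels sampled at 250 Hz, flattened into feature vectors.
    n_epochs, n_channels, n_samples = 300, 8, 250
    X = rng.standard_normal((n_epochs, n_channels * n_samples))
    y = rng.integers(0, 3, size=n_epochs)  # label: which of 3 sources was attended

    # A single classifier over all epochs, as in the abstract; the SVM and
    # the cross-validation scheme are illustrative choices, not the authors'.
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"mean cross-validated accuracy: {scores.mean():.2%}")

On real data, the per-source recordings would replace the synthetic X and y, and the mean accuracy over held-out epochs would correspond to the figure the abstract reports.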