Abstract

Most previous work on sound event detection (SED) relies on binary hard labels of sound events, leaving information at other scales underexplored. To address this problem, we introduce multiple granularities of knowledge into the system to perform hierarchical acoustic information fusion for SED. Specifically, we present an interactive dual-conformer (IDC) module to adaptively fuse medium-grained and fine-grained acoustic information based on the hard and soft labels of sound events. In addition, we propose a scene-dependent mask estimator (SDME) module to extract coarse-grained information from acoustic scenes, introducing scene-event relationships into the SED system. Experimental results show that the proposed IDC and SDME modules efficiently fuse acoustic information at different scales and thereby further improve SED performance. The proposed system achieved the top-1 result in DCASE 2023 Challenge Task 4B.
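As a rough intuition for the two ideas sketched above, the toy code below illustrates (a) a symmetric cross-branch exchange between fine-grained and medium-grained representations, and (b) masking event probabilities with a scene-dependent mask. This is only a minimal sketch of the general concepts; the function names, the additive fusion with weight `alpha`, and the binary mask are illustrative assumptions, not the paper's actual IDC/SDME architecture.

```python
import numpy as np

def interactive_fusion(fine, medium, alpha=0.5):
    # Hypothetical symmetric exchange: each granularity is refined
    # with information from the other branch (stand-in for the
    # interaction between the two conformer branches).
    fine_out = fine + alpha * medium
    medium_out = medium + alpha * fine
    return fine_out, medium_out

def scene_dependent_mask(event_probs, scene_mask):
    # Suppress events that are implausible for the detected scene,
    # a crude stand-in for exploiting scene-event relationships.
    return event_probs * scene_mask

rng = np.random.default_rng(0)
fine = rng.random((4, 8))    # e.g. frame-level (fine-grained) features
medium = rng.random((4, 8))  # e.g. segment-level (medium-grained) features
f_out, m_out = interactive_fusion(fine, medium)

# Mask out events 2, 4, and 7 as implausible for the current scene.
mask = np.array([1, 1, 0, 1, 0, 1, 1, 0])
masked = scene_dependent_mask(rng.random(8), mask)
```

In this sketch the fusion is a simple weighted sum; the actual IDC module learns the interaction adaptively from hard and soft labels, and the SDME estimates the mask from the acoustic scene rather than using a fixed binary vector.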
