Date of Archiving
2023
Archive
Radboud Data Repository
Related publications
Publication type
Dataset
Access level
Restricted access
Organization
Neurophysiology
Neurobiology
Audience(s)
Biology
Languages used
English
Key words
hearing aids; sound textures; noise reduction; speech enhancement; statistical learning
Abstract
Human communication often occurs under adverse acoustic conditions, where speech signals mix with interfering background noise. A substantial fraction of interfering noise can be characterized by a limited set of statistics and has been referred to as auditory textures. Recent research in neuroscience has demonstrated that humans and animals use these statistics to recognize, classify, and suppress textural sounds. Here, we propose a fast, domain-free noise suppression method, termed Statistical Sound Filtering (SSF), that exploits the stationarity and spectral similarity of the sound sources that make up sound textures. SSF builds a library of spectrotemporal features of the background noise and compares it against instants in speech-noise mixtures to subtract contributions that are statistically consistent with the interfering noise. We evaluated the performance of SSF using multiple quality measures and human listeners on the standard TIMIT corpus of speech utterances. SSF improved sound quality across all performance metrics, each capturing different aspects of the sound. Additionally, human participants reported reduced background noise levels as a result of filtering, without significant damage to speech quality. SSF executes rapidly (~100x real-time) and can be retrained rapidly and continuously in changing acoustic contexts. SSF can thus be integrated into hearing aids, where power-efficient, fast, and adaptive training and execution are critical.
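The general idea described in the abstract (learn spectral statistics of the background noise from a noise-only recording, then subtract the statistically consistent contributions from a speech-noise mixture) can be illustrated with a minimal spectral-subtraction sketch. This is a hypothetical illustration of the principle, not the authors' SSF implementation; the window sizes and the subtraction rule are assumptions.

```python
# Illustrative sketch, in the spirit of SSF but NOT the dataset's actual
# method: estimate per-frequency noise statistics from a noise-only
# recording, then subtract them from a speech+noise mixture.
import numpy as np

def stft(x, win=256, hop=128):
    """Complex short-time spectrum with a Hann window."""
    w = np.hanning(win)
    frames = [x[i:i + win] * w for i in range(0, len(x) - win, hop)]
    return np.fft.rfft(np.array(frames), axis=1)

def istft(spec, win=256, hop=128):
    """Overlap-add reconstruction of the framed signal."""
    frames = np.fft.irfft(spec, axis=1)
    out = np.zeros(hop * (len(frames) - 1) + win)
    for k, f in enumerate(frames):
        out[k * hop:k * hop + win] += f
    return out

def ssf_like_filter(mixture, noise, win=256, hop=128, alpha=1.0):
    """Subtract the mean noise magnitude (a simple spectral statistic
    of the texture) from each time-frequency bin, flooring at zero."""
    noise_mag = np.abs(stft(noise, win, hop)).mean(axis=0)   # noise statistics
    spec = stft(mixture, win, hop)
    mag = np.maximum(np.abs(spec) - alpha * noise_mag, 0.0)  # subtraction
    return istft(mag * np.exp(1j * np.angle(spec)), win, hop)

# Toy example: a 440 Hz tone buried in white noise at 16 kHz.
rng = np.random.default_rng(0)
t = np.arange(16000) / 16000.0
noise = 0.5 * rng.standard_normal(16000)
mixture = np.sin(2 * np.pi * 440 * t) + 0.5 * rng.standard_normal(16000)
cleaned = ssf_like_filter(mixture, noise)
```

A real texture-based filter would use richer spectrotemporal statistics than the per-bin mean magnitude used here, but the pipeline shape (train on noise, compare, subtract) is the same.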
This item appears in the following Collection(s)
- Datasets [1912]
- Faculty of Science [37995]