Comparison of Auditory-Inspired Models Using Machine-Learning for Noise Classification

Salinna Abdullah, Andreas Demosthenous and Ifat Yasin (University College London, United Kingdom)

Two auditory-inspired feature-extraction models, the Multi-Resolution CochleaGram (MRCG) and the Auditory Image Model (AIM), are compared on their acoustic noise classification performance when combined with one of two supervised machine-learning algorithms: an ensemble of bagged decision trees or a support vector machine (SVM). Noise classification accuracies are assessed in nine different sound environments, with or without added speech and at different signal-to-noise ratios (SNRs). The results demonstrate that classification scores using feature extraction with the MRCG model are significantly higher than those using the AIM model (p < 0.05), irrespective of the machine-learning classifier. Using the SVM as the classifier also resulted in significantly better (p < 0.05) classification performance than the bagged trees, irrespective of the feature-extraction model. Overall, the MRCG model combined with the SVM provides more accurate classification for most of the sound stimuli tested. Based on this comparison, suggestions are offered on how auditory-model-plus-machine-learning approaches can be improved for sound classification.
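The evaluated pipeline, auditory-inspired feature extraction followed by a supervised classifier, can be outlined with a minimal sketch. The sketch below is not the authors' implementation: it assumes pre-computed MRCG or AIM feature vectors (replaced here by random placeholders) and uses scikit-learn's SVC and BaggingClassifier as stand-ins for the paper's SVM and bagged-decision-tree classifiers.

```python
# Minimal sketch of the compared pipeline: auditory features -> classifier.
# The feature matrix X is a random placeholder for MRCG/AIM features;
# labels y are placeholder environment classes (nine sound environments).
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_clips, n_features, n_classes = 900, 256, 9
X = rng.normal(size=(n_clips, n_features))    # stand-in for extracted features
y = rng.integers(0, n_classes, size=n_clips)  # stand-in for environment labels

classifiers = {
    "SVM": SVC(kernel="rbf"),
    # BaggingClassifier's default base estimator is a decision tree,
    # giving an ensemble of bagged decision trees.
    "Bagged trees": BaggingClassifier(n_estimators=100),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean cross-validated accuracy = {scores.mean():.3f}")
```

With real MRCG or AIM features in place of the random placeholders, the same cross-validation loop would yield the per-classifier accuracies that the paper compares across environments and SNRs.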

Conference: UKSim-AMSS 22nd International Conference on Computer Modelling and Simulation, UKSim2020

Published: Mar 25, 2020

DOI: 10.5013/IJSSST.a.21.02.20