
Speech enhancement by LSTM-based noise suppression followed by CNN-based speech restoration

ORCID: 0000-0002-3393-8252
Affiliation/Institute: Institute for Communications Technology, Technische Universität Braunschweig
Strake, Maximilian; Defraene, Bruno; Fluyt, Kristoff; Tirry, Wouter
ORCID: 0000-0002-8895-5041
Affiliation/Institute: Institute for Communications Technology, Technische Universität Braunschweig
Fingscheidt, Tim

Single-channel speech enhancement in highly non-stationary noise conditions is a very challenging task, especially when interfering speech is included in the noise. Deep learning-based approaches have notably improved the performance of speech enhancement algorithms under such conditions, but still introduce speech distortions when strong noise suppression is to be achieved. We propose to address this problem by using a two-stage approach, first performing noise suppression and subsequently restoring natural-sounding speech, using specifically chosen neural network topologies and loss functions for each task. A mask-based long short-term memory (LSTM) network is employed for noise suppression, and speech restoration is performed via spectral mapping with a convolutional encoder-decoder network (CED). The proposed method improves speech quality (PESQ) over state-of-the-art single-stage methods by about 0.1 points for unseen, highly non-stationary noise types including interfering speech. Furthermore, it is able to increase intelligibility in low-SNR conditions and consistently outperforms all reference methods.
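
To make the two-stage architecture more concrete, the following PyTorch sketch wires a mask-based LSTM noise-suppression stage into a convolutional encoder-decoder (CED) restoration stage operating on magnitude spectrograms. It is a minimal illustration, not the authors' implementation: the layer counts and sizes, the choice of magnitude-spectrogram features, and the omission of the task-specific loss functions mentioned in the abstract are all simplifying assumptions.

# Minimal sketch of the two-stage pipeline: an LSTM estimates a time-frequency
# mask for noise suppression, and a convolutional encoder-decoder (CED) performs
# spectral mapping for speech restoration. All hyperparameters are assumptions.
import torch
import torch.nn as nn


class MaskLSTM(nn.Module):
    """Stage 1: mask-based LSTM noise suppression."""

    def __init__(self, num_bins=257, hidden=256, layers=2):
        super().__init__()
        self.lstm = nn.LSTM(num_bins, hidden, num_layers=layers, batch_first=True)
        self.proj = nn.Linear(hidden, num_bins)

    def forward(self, noisy_mag):              # noisy_mag: (batch, frames, bins)
        h, _ = self.lstm(noisy_mag)
        mask = torch.sigmoid(self.proj(h))     # mask values in [0, 1]
        return mask * noisy_mag                # noise-suppressed magnitudes


class RestorationCED(nn.Module):
    """Stage 2: CED spectral mapping for speech restoration."""

    def __init__(self, channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 2 * channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * channels, channels, 3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 1, 3, stride=2,
                               padding=1, output_padding=1),
        )

    def forward(self, suppressed_mag):         # (batch, frames, bins)
        x = suppressed_mag.unsqueeze(1)        # add channel dim for 2D convs
        out = self.decoder(self.encoder(x)).squeeze(1)
        # crop back to the input size (transposed convs may overshoot slightly)
        return out[:, : suppressed_mag.size(1), : suppressed_mag.size(2)]


if __name__ == "__main__":
    noisy = torch.rand(4, 100, 257)            # dummy noisy magnitude spectrograms
    suppression, restoration = MaskLSTM(), RestorationCED()
    restored = restoration(suppression(noisy))
    print(restored.shape)                      # torch.Size([4, 100, 257])

Training each stage with its own task-specific loss, as the abstract describes, would be layered on top of this skeleton; the sketch only shows how the noise-suppression output feeds the restoration network.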


