PROJECT TITLE:
Phase Processing for Single-Channel Speech Enhancement: History and recent advances
With the advancement of technology, both assisted listening devices and speech communication devices are becoming more portable and more frequently used. As a consequence, users of devices such as hearing aids, cochlear implants, and mobile telephones expect their devices to work robustly anywhere and at any time. This holds in particular for challenging noisy environments such as a cafeteria, a restaurant, a subway, a factory, or traffic. One way to make assisted listening devices robust to noise is to apply speech enhancement algorithms. To enhance the corrupted speech, spatial diversity can be exploited by a constructive combination of microphone signals (so-called beamforming), and the different spectro-temporal properties of speech and noise can be exploited as well. Here, we concentrate on single-channel speech enhancement algorithms that rely on spectro-temporal properties. On the one hand, these algorithms can be employed when the miniaturization of devices permits only a single microphone. On the other hand, when multiple microphones are available, single-channel algorithms can be employed as a postprocessor at the output of a beamformer.

To take advantage of the short-term stationarity of natural sounds, many of these approaches process the signal in a time-frequency representation, most often the short-time discrete Fourier transform (STFT) domain. In this domain, the coefficients of the signal are complex-valued and can therefore be represented by their absolute value (referred to in the literature as both the STFT magnitude and the STFT amplitude) and their phase. While the modeling and processing of the STFT magnitude has been the center of interest over the past three decades, the phase has been largely ignored.
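To make the magnitude/phase decomposition concrete, here is a minimal illustrative sketch (not code from the article): a naive STFT of a random stand-in signal, split into magnitude and phase, showing that the complex coefficients are fully recovered from the pair. The frame length, hop size, and window choice are arbitrary assumptions for the example.

```python
import numpy as np

def stft(x, frame_len=256, hop=128):
    """Naive STFT: slice the signal into overlapping frames,
    apply a Hann window, and take the FFT of each frame."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=1)  # complex coefficients per frame

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)  # stand-in for a short speech signal

X = stft(x)
magnitude = np.abs(X)    # the STFT magnitude (amplitude)
phase = np.angle(X)      # the STFT phase, in radians

# Magnitude and phase together carry all the information in the
# complex STFT coefficients: combining them recovers X exactly.
X_reconstructed = magnitude * np.exp(1j * phase)
assert np.allclose(X, X_reconstructed)
```

Magnitude-centric enhancement algorithms modify only `magnitude` and reuse the noisy `phase` at resynthesis; the phase-processing methods surveyed in the article modify the `phase` term as well.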