
Tuesday, February 2, 2016

The Selective Binaural Beamformer: It's out!

After six or so months of going through the peer-review gauntlet, our paper on the Selective Binaural Beamformer (or simply SBB) is finally published.  Thanks to all my coauthors (Menno, Daniel, Simon, and Steven) as well as the reviewers (especially reviewer #2, who gave very tough but important feedback), I think this became a very nice paper.  Please go ahead and read it in the EURASIP Journal on Advances in Signal Processing (full title: "Speech enhancement for multimicrophone binaural hearing aids aiming to preserve the spatial auditory scene").  It's open access: you can read it on the journal's page or download a PDF (see the right sidebar on the linked page), and it's CC BY 4.0 licensed.

The basic idea behind the algorithm is this: normally, when using a beamforming algorithm on a binaural hearing aid, the entire auditory image collapses to the position of the beam direction - that is, ALL sound appears (to the hearing aid user) to originate from the same location.  Various methods have been proposed to fix this - Simon Doclo in particular has done a lot of work on this topic (which is why it was so helpful to have him as a coauthor).  My approach to this problem is to take the signal in the STFT domain (i.e., the signal is divided into discrete short time frames and narrow frequency bins) and in each "bin" (time-frequency unit) decide whether the target signal or the background noise is dominant.  In the first case, I use the beamformer output: the signal is enhanced and collapsed, but that's OK - it _should_ be coming from the target direction anyway.  In the second case, I simply use the signal as it comes from the two microphones closest to the ear canals, without processing - hence there is (almost) no difference from the "real" signal reaching the ears.  So, all the benefit of the beamformer without the nasty collapse of the auditory field!
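To make the per-bin switching concrete, here is a minimal sketch in Python/NumPy. It assumes you already have the STFTs of the two ear-canal reference microphones, the beamformer output, and a per-bin SNR estimate; the function name and the decision threshold are my own placeholders, not anything from the paper.

```python
import numpy as np

def selective_binaural_output(Y_left, Y_right, Y_bf, snr_db, threshold_db=0.0):
    """Per-bin selection between beamformer output and unprocessed
    reference-microphone signals (a rough sketch of the SBB idea).

    Y_left, Y_right : STFT of the reference microphone at each ear (frames x bins)
    Y_bf            : STFT-domain beamformer output (frames x bins)
    snr_db          : estimated target-to-noise ratio per bin, in dB
    threshold_db    : hypothetical decision threshold (illustrative only)
    """
    # Boolean mask: True where the target signal is deemed dominant.
    target_dominant = snr_db > threshold_db

    # Target-dominant bins: use the (spatially collapsed) beamformer output
    # at both ears - acceptable, since the target should come from the
    # beam direction anyway.
    # Noise-dominant bins: pass the unprocessed ear signals through,
    # preserving the binaural cues of the background scene.
    out_left = np.where(target_dominant, Y_bf, Y_left)
    out_right = np.where(target_dominant, Y_bf, Y_right)
    return out_left, out_right
```

Inverse-STFT of `out_left` and `out_right` then yields the two ear signals.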

...Well, mostly.  The tricky part is to make a good speech/noise decision (or rather a "target signal"/"background noise" decision).  But there's a fancy SNR estimator in there, from Adam Kuklasinski (see ref. 19 - I met him in Lisbon where he presented it at EUSIPCO), and that works pretty well.
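For readers who want a feel for SNR estimation in the STFT domain: below is the classic single-channel decision-directed a priori SNR estimator (Ephraim and Malah, 1984), shown purely as a generic stand-in - the paper uses a different, multichannel estimator (ref. 19), and the noise PSD is assumed to be given.

```python
import numpy as np

def decision_directed_snr(Y, noise_psd, alpha=0.98):
    """Decision-directed a priori SNR estimate (Ephraim & Malah, 1984).

    Y         : STFT of the noisy signal (frames x bins)
    noise_psd : estimated noise power per bin (frames x bins), assumed known
    alpha     : smoothing factor between past estimate and instantaneous SNR
    Returns the a priori SNR estimate xi (linear scale, frames x bins).
    """
    n_frames, n_bins = Y.shape
    xi = np.zeros((n_frames, n_bins))
    prev_speech_power = np.zeros(n_bins)
    for l in range(n_frames):
        # A posteriori SNR: noisy power over noise power.
        gamma = np.abs(Y[l]) ** 2 / noise_psd[l]
        # Blend previous frame's speech estimate with instantaneous SNR.
        xi[l] = (alpha * prev_speech_power / noise_psd[l]
                 + (1.0 - alpha) * np.maximum(gamma - 1.0, 0.0))
        # Wiener-style speech power estimate feeding the next frame.
        gain = xi[l] / (1.0 + xi[l])
        prev_speech_power = (gain * np.abs(Y[l])) ** 2
    return xi
```

Thresholding `10 * log10(xi)` per bin would then give the kind of binary target/noise decision described above.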

So if this is the kind of thing that seems interesting to you, read the paper - and I will post some of the sample files (that were used during subjective testing) soonish on my personal homepage.