EnBinExt

Head:  Prof. Dr. Jürgen Peissig
Team:  Song Li, M.Sc., Roman Schlieper, M.Sc.
Year:  2018
Date:  08-11-16
Funding:  Huawei Innovation Research Program FLAGSHIP (HIRP FLAGSHIP)
Duration:  11/2016 - 11/2018
Completed:  yes

Project Description

Today’s smart devices offer increasing capabilities for capturing and playing back video material in 2D or, via stereoscopic recording, in 3D. However, the degree of audio immersion and perceived externalization achieved with wearable devices still lags behind the perceived degree of visual immersion.

A binaural rendering system aims to artificially generate virtual 3D sound using only two audio channels reproduced over the listener's personal headphones. The basic principle for creating a virtual 3D audio signal over headphones is to filter a sound source with head-related transfer functions (HRTFs). In many binaural rendering applications, however, the virtual 3D sound source is perceived inside the head, owing to the use of non-individual HRTFs or missing auditory cues. In addition, the headphones used to present binaurally rendered signals are important for perceived externalization, yet it is still not fully clear which acoustic headphone parameters influence and benefit the perception of externalization.
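In the time domain, the filtering described above amounts to convolving the mono source signal with a pair of head-related impulse responses (HRIRs), one per ear. The following minimal Python sketch illustrates this principle; the HRIRs used here are placeholder arrays for illustration only, whereas in practice they would be measured or loaded from an HRTF database.

    import numpy as np
    from scipy.signal import fftconvolve

    def render_binaural(mono, hrir_left, hrir_right):
        # Convolve the mono source with the left/right head-related
        # impulse responses (the time-domain counterparts of the HRTFs)
        # to obtain the two headphone channels.
        left = fftconvolve(mono, hrir_left)
        right = fftconvolve(mono, hrir_right)
        return np.stack([left, right], axis=-1)  # shape: (samples, 2)

    # Placeholder data: a half-second noise burst and dummy 256-tap
    # HRIRs; real HRIRs would come from measurements or a database.
    fs = 48000
    mono = np.random.randn(fs // 2)
    hrir_left = np.random.randn(256) * np.hanning(256)
    hrir_right = np.random.randn(256) * np.hanning(256)
    stereo = render_binaural(mono, hrir_left, hrir_right)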

This project aims to identify (i) the auditory cues of binaural signals and (ii) the headphone parameters that are relevant to perceived externalization. Furthermore, binaural rendering algorithms are to be optimized to enhance the degree of externalization, and a multimodal model is to be established that predicts the performance of headphones with respect to the perceived externalization of binaurally reproduced sound images.

Partners