Our main research topics are “inverse problems for acoustic fields” and “signal processing for sound field recording, transmission, and reproduction”. Details are described below.
Inverse problems for acoustic fields: We tackle inverse problems for acoustic fields, such as sound field imaging and analysis, source localization, and estimation of room acoustic parameters. We pursue new methodologies based on various approaches (optimization, machine learning, etc.) and develop systems that achieve these goals.
Signal processing for sound field recording, transmission, and reproduction: We deal with a broad range of problems in sound field recording, transmission, and reproduction. Building on these methodologies, we develop new systems for telecommunication, virtual reality, and other applications.
Slides introducing our research topics
Sound field recording and reproduction
Left: original sound field, Right: reproduced sound field with circular loudspeaker array
Sound field recording and reproduction aims at physically reconstructing a sound space with high fidelity. By using arrays of multiple microphones and loudspeakers together with an appropriate signal transform, high-accuracy sound field reproduction can be achieved.
- S. Koyama, et al., “Analytical approach to wave field reconstruction filtering in spatio-temporal frequency domain,” IEEE Trans. Audio, Speech, Lang. Process., vol. 21, no. 4, pp. 685-696, 2013.
- S. Koyama, et al., “Wave field reconstruction filtering in cylindrical harmonic domain for with-height recording and reproduction,” IEEE/ACM Trans. Audio, Speech, Lang. Process., vol. 22, no. 10, pp. 1546-1557, 2014.
- S. Koyama, et al., “Analytical approach to transforming filter design for sound field recording and reproduction using circular arrays with a spherical baffle,” J. Acoust. Soc. Amer., vol. 139, no. 3, pp. 1024-1036, 2016.
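The idea of driving a loudspeaker array so that a desired field is reconstructed inside a target region can be illustrated with a simple numerical sketch. The example below is a minimal pressure-matching formulation (regularized least squares at interior control points), not the analytical wave-field-reconstruction filters of the papers above; the geometry, frequency, and regularization weight are illustrative assumptions.

```python
import numpy as np

c = 343.0            # speed of sound [m/s]
f = 300.0            # frequency [Hz] (illustrative choice)
k = 2 * np.pi * f / c

rng = np.random.default_rng(0)

# Circular loudspeaker array of radius 1.5 m, modeled as point sources
L = 16
theta = 2 * np.pi * np.arange(L) / L
src = 1.5 * np.stack([np.cos(theta), np.sin(theta)], axis=1)

# Control points randomly placed in an interior disk of radius 0.5 m
M = 64
r = 0.5 * np.sqrt(rng.random(M))
phi = 2 * np.pi * rng.random(M)
ctrl = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)

# Transfer matrix of free-field monopoles: G = exp(-jkr) / (4*pi*r)
dist = np.linalg.norm(ctrl[:, None, :] - src[None, :, :], axis=2)
G = np.exp(-1j * k * dist) / (4 * np.pi * dist)

# Desired field: unit-amplitude plane wave traveling in the +x direction
p_des = np.exp(-1j * k * ctrl[:, 0])

# Regularized least-squares (pressure matching) for the driving signals
lam = 1e-3
d = np.linalg.solve(G.conj().T @ G + lam * np.eye(L), G.conj().T @ p_des)

# Relative reproduction error at the control points
err = np.linalg.norm(G @ d - p_des) / np.linalg.norm(p_des)
print(f"relative reproduction error: {err:.3f}")
```

With enough loudspeakers relative to the active spatial modes in the target region, the residual error stays small; raising the frequency or shrinking the array eventually introduces the aliasing artifacts discussed in the next topic.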
Sparse representation for sound field analysis
Left: Plane-wave-decomposition-based reconstruction, Right: Sparse-sound-field-decomposition-based reconstruction
A sound field is generally analysed by decomposing it into plane-wave functions; however, this approach suffers from spatial aliasing artifacts, which are especially severe at high frequencies. Analysis based on the sparse representations developed in the compressed sensing literature makes it possible to reduce these artifacts, which can be regarded as super-resolution in sound field analysis.
- S. Koyama and L. Daudet, “Sparse representation of a spatial sound field in a reverberant environment,” IEEE J. Selected Topics Signal Process., vol. 13, no. 1, pp. 172-184, 2019.
- S. Koyama, et al., “Sparse sound field decomposition for super-resolution in recording and reproduction,” J. Acoust. Soc. Amer., vol. 143, no. 6, pp. 3780-3795, 2018.
- N. Murata, S. Koyama, et al., “Sparse representation using multidimensional mixed-norm penalty with application to sound field decomposition,” IEEE Trans. Signal Process., vol. 66, no. 12, pp. 3327-3338, 2018.
- S. Koyama, et al., “Sparse sound field representation in recording and reproduction for reducing spatial aliasing artifacts,” in Proc. IEEE Int. Conf. Acoust., Speech., Signal Process. (ICASSP), Florence, May 2014, pp. 4443-4447.
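A basic form of this idea can be sketched numerically: represent the pressure measured by a microphone array as a sparse combination of candidate point sources on a grid, and solve the l1-regularized least-squares problem (LASSO) with a plain ISTA loop. This is only a generic sketch of sparse decomposition, not the specific models or algorithms of the papers above; the array geometry, grid, and penalty weight are illustrative assumptions.

```python
import numpy as np

c, f = 343.0, 800.0
k = 2 * np.pi * f / c
rng = np.random.default_rng(1)

# Circular microphone array of radius 0.45 m
M = 32
phi_m = 2 * np.pi * np.arange(M) / M
mics = 0.45 * np.stack([np.cos(phi_m), np.sin(phi_m)], axis=1)

# Candidate source grid covering a 4 m x 4 m region
gx, gy = np.meshgrid(np.linspace(-2, 2, 21), np.linspace(-2, 2, 21))
grid = np.stack([gx.ravel(), gy.ravel()], axis=1)          # N = 441 candidates

# Dictionary of monopole transfer functions
dist = np.linalg.norm(mics[:, None, :] - grid[None, :, :], axis=2)
D = np.exp(-1j * k * dist) / (4 * np.pi * dist)

# Ground truth: two active sources on the grid, plus small noise
x_true = np.zeros(grid.shape[0], dtype=complex)
x_true[50] = 1.0
x_true[300] = 0.7j
p = D @ x_true + 1e-4 * (rng.standard_normal(M) + 1j * rng.standard_normal(M))

# ISTA: gradient step on the data fit, then complex soft-thresholding
step = 1.0 / np.linalg.norm(D, 2) ** 2     # 1 / Lipschitz constant
lam = 1e-3
x = np.zeros_like(x_true)
for _ in range(500):
    g = x - step * D.conj().T @ (D @ x - p)
    mag = np.abs(g)
    x = g * np.maximum(1 - step * lam / np.maximum(mag, 1e-12), 0)

res = np.linalg.norm(D @ x - p) / np.linalg.norm(p)
print(f"relative residual: {res:.3f}")
```

Because the dictionary contains far fewer measurements than candidates, the l1 penalty is what makes the decomposition well-posed; the estimated coefficient vector concentrates its energy on few grid points, in contrast to the smeared spectra of plane-wave decomposition.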
Optimization of sensor and actuator placement
Left: Loudspeaker and control-point placement based on empirical interpolation method for sound field control, Right: Synthesized plane wave field with optimized placement
Determining the placement of multiple sensors and actuators for analysing/controlling a wavefield is a difficult problem. How should the optimization criterion for the placement be defined? How can a computationally efficient algorithm be developed? We develop sensor/actuator placement methods specifically for wavefield analysis/control, e.g., by using methods for function interpolation.
- S. Koyama, et al., “Optimizing source and sensor placement for sound field control: An overview,” IEEE/ACM Trans. Audio, Speech, Lang. Process., 2020.
- S. Koyama, et al., “Joint source and sensor placement for sound field control based on empirical interpolation method,” in Proc. IEEE Int. Conf. Acoust., Speech., Signal Process. (ICASSP), Calgary, 2018, pp. 501-505.
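The flavor of interpolation-based placement can be sketched with a greedy selection in the style of the (discrete) empirical interpolation method: given a basis of candidate fields, repeatedly pick the point where the current interpolation residual is largest. This is a generic DEIM-style sketch, not the joint source/sensor procedure of the paper above; the candidate region, plane-wave snapshots, and mode count are illustrative assumptions.

```python
import numpy as np

c, f = 343.0, 1000.0
k = 2 * np.pi * f / c
rng = np.random.default_rng(2)

# Candidate sensor positions in a 1 m x 1 m region
P = 200
cand = rng.uniform(-0.5, 0.5, size=(P, 2))

# Snapshot matrix: plane waves from many directions sampled at candidates
angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
A = np.exp(-1j * k * cand @ dirs.T)                  # (P, 72)

# Orthonormal basis of the field space via SVD, truncated to n modes
U, s, _ = np.linalg.svd(A, full_matrices=False)
n = 10
U = U[:, :n]

# Greedy selection: each new point maximizes the interpolation residual
idx = [int(np.argmax(np.abs(U[:, 0])))]
for j in range(1, n):
    coef = np.linalg.solve(U[idx, :j], U[idx, j])    # interpolate mode j
    resid = U[:, j] - U[:, :j] @ coef                # zero at selected points
    idx.append(int(np.argmax(np.abs(resid))))

print("selected sensor indices:", idx)
```

The selected points are exactly where the truncated basis is hardest to interpolate, so measurements taken there constrain the field estimate most; the same greedy structure extends to actuator placement by working with transfer functions instead of field snapshots.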