Mobile robots in real-life settings would benefit from being able to localize sound sources. Such a capability can nicely complement vision to help localize a person or an interesting event in the environment, and also to provide enhanced processing for other capabiliti...