The intersection of human and machine listening is the focus of my personal research and underpins the projects on which I have been employed. I am particularly interested in the processes by which the healthy human auditory system adapts to its environmental context, compensating for room acoustics and background noise. My work contributes to research efforts that (i) examine these ‘perceptual constancy’ effects in human audition and (ii) simulate these effects to develop robust machine listeners for real-world tasks.
My original motivation for this work came from my sound art practice: I created installations that responded intuitively to live sound in a given room, but struggled to engineer sonic interactions that could reliably survive a change of venue.
My PhD, in the speech sciences, examined perceptual constancy in real-room listening. I undertook a series of listening experiments to test hypotheses about the manner in which, and the timescales over which, human listeners adapt to a room, and built a computational model that simulates the main characteristics of this compensation mechanism.
I have since worked on software engineering projects that encapsulate principles of human audition in machine listeners for educational, clinical, and industrial applications. I have contributed to the research design and resulting codebases for tools in computer-assisted pronunciation training, conversational rehabilitation for cochlear implant users, and bedside snore detection and assessment via smartphone. I recently joined the University of Leeds to work on a project exploring the music listening behaviour of people who use hearing aids.