I’ll be back in York again next month! This time I’m presenting a research seminar in the Department of Music, talking about audio-sensing in participatory sound art. If you are close enough to come and listen, please do! [more info]
- A. V. Beeston, “Audio-sensing in participatory sound art: Perceptually-informed methods for room-robust sound information retrieval,” Department of Music Research Seminar, University of York, York, UK, 7 Nov 2018. Invited talk.
[BibTeX] [link]
@inproceedings{Beeston:2018york,
  author    = {Beeston, Amy V},
  title     = {{Audio-sensing in participatory sound art: Perceptually-informed methods for room-robust sound information retrieval}},
  booktitle = {{Department of Music Research Seminar, University of York}},
  year      = {2018},
  address   = {York, UK},
  month     = nov,
  note      = {Invited talk, 7 Nov},
  link      = {https://www.york.ac.uk/music/news-and-events/events/research/2018-19/autumn-week-7/}
}
Abstract
In this talk I will discuss the topic of audio-sensing, or machine listening, in the context of participatory sound art. I argue that while many sound artists are intimately concerned with the processes of human listening, relatively few have yet engaged with machine listening in this context. In part, this may be due to two significant difficulties that have not yet been fully investigated: firstly, machine listening software must be tuned to the artistic idea itself (e.g. to detect specific sounding events, or to track the voices of gallery visitors); secondly, methods must be robust to background noise and reverberation in order to deal ‘sensibly’ with ever-present audio signal distortions (e.g. from other gallery visitors and from unknown room acoustic conditions). Drawing examples from my own and others’ work, I hope to facilitate sound artists’ engagement with audio-sensing methods by showing how a psychoacoustically-motivated approach to machine listening – using insights from human audition – can help to create more ‘reliable’ techniques for use in participatory sound works in art gallery settings.