Improve automatic detection of animal call sequences with temporal context


Bibliographic Details
Published in: Journal of The Royal Society Interface
Main Authors: Madhusudhana, Shyam, Shiu, Yu, Klinck, Holger, Fleishman, Erica, Liu, Xiaobai, Nosal, Eva-Marie, Helble, Tyler, Cholewiak, Danielle, Gillespie, Douglas, Širović, Ana, Roch, Marie A
Other Authors: University of St Andrews. School of Biology, University of St Andrews. Sea Mammal Research Unit, University of St Andrews. Scottish Oceans Institute, University of St Andrews. Sound Tags Group, University of St Andrews. Bioacoustics group, University of St Andrews. Marine Alliance for Science & Technology Scotland
Format: Article in Journal/Newspaper
Language: English
Published: 2021
Subjects: DAS
Online Access: http://hdl.handle.net/10023/23659
https://doi.org/10.1098/rsif.2021.0297
Description
Summary: Funding: This work was supported by the US Office of Naval Research (grant no. N00014-17-1-2867).
Many animals rely on long-form communication, in the form of songs, for vital functions such as mate attraction and territorial defence. We explored the prospect of improving automatic recognition performance by using the temporal context inherent in song. The ability to accurately detect sequences of calls has implications for conservation and biological studies. We show that the performance of a convolutional neural network (CNN), designed to detect song notes (calls) in short-duration audio segments, can be improved by combining it with a recurrent network designed to process sequences of learned representations from the CNN on a longer time scale. The combined system of independently trained CNN and long short-term memory (LSTM) network models exploits the temporal patterns between song notes. We demonstrate the technique using recordings of fin whale (Balaenoptera physalus) songs, which comprise patterned sequences of characteristic notes. We evaluated several variants of the CNN + LSTM network. Relative to the baseline CNN model, the CNN + LSTM models reduced performance variance, offering a 9-17% increase in area under the precision-recall curve and a 9-18% increase in peak F1-scores. These results show that the inclusion of temporal information may offer a valuable pathway for improving the automatic recognition and transcription of wildlife recordings.
Publisher PDF. Peer reviewed.
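The architecture the abstract describes, a CNN that embeds each short audio segment, followed by an LSTM that reads the sequence of embeddings over a longer time scale, can be sketched as follows. This is not the authors' implementation: the "CNN" here is a stand-in random projection, the LSTM cell is a minimal hand-rolled NumPy version, and all sizes (segment count, spectrogram shape, embedding and hidden dimensions) are hypothetical, chosen only to show how per-segment features flow into a recurrent layer.

```python
import numpy as np

rng = np.random.default_rng(42)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cnn_embed(segment, W_proj):
    """Stand-in for the trained per-segment CNN: map one short
    spectrogram segment to a fixed-size learned representation."""
    return np.tanh(W_proj @ segment.ravel())

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update for a single segment embedding x."""
    n = h.size
    z = W @ x + U @ h + b            # pre-activations for all four gates
    i = sigmoid(z[0 * n:1 * n])      # input gate
    f = sigmoid(z[1 * n:2 * n])      # forget gate
    o = sigmoid(z[2 * n:3 * n])      # output gate
    g = np.tanh(z[3 * n:4 * n])      # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Hypothetical sizes: 20 consecutive segments, 32x32 spectrograms,
# 8-dim CNN embeddings, 4 hidden LSTM units.
T, emb, hid = 20, 8, 4
segments = rng.standard_normal((T, 32, 32))
W_proj = rng.standard_normal((emb, 32 * 32)) * 0.01
W = rng.standard_normal((4 * hid, emb)) * 0.1
U = rng.standard_normal((4 * hid, hid)) * 0.1
b = np.zeros(4 * hid)

# Run the recurrence over the sequence of segment embeddings.
h, c = np.zeros(hid), np.zeros(hid)
for t in range(T):
    x = cnn_embed(segments[t], W_proj)
    h, c = lstm_step(x, h, c, W, U, b)

# Illustrative sequence-level score from the final hidden state
# (weights are untrained, so the value itself is meaningless).
w_out = rng.standard_normal(hid)
score = sigmoid(w_out @ h)
print(f"song score: {score:.3f}")
```

In the paper's setup the two models are trained independently; the sketch mirrors only the data flow, where temporal patterns between song notes are carried through the LSTM's hidden state rather than judged one segment at a time.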