About vak
The vak library has two main goals:

- make it easier for researchers studying animal vocalizations to apply neural network algorithms to their data
- provide a common framework for benchmarking neural network algorithms on tasks related to animal vocalizations
Neural network algorithms in vak help answer questions about vocal behavior.
We use the term “vocal behavior” to encompass related research areas
including animal communication [1],
cultural evolution [2],
and vocal learning [3] [4].
Models in the vak library include deep learning algorithms developed for bioacoustics [5], but are designed specifically for computational studies of vocal behavior [6].
The library was developed by David Nicholson and Yarden Cohen for experiments assessing the performance of TweetyNet, a neural network that automates annotation of birdsong by segmenting spectrograms into the units of song, called syllables.