Current research on autonomous agent animation models the vision-based perception of agents using abstract perception queries, such as line-of-sight ray casts and view-cone intersections with the environment. Prior work has developed computational models for sound synthesis and propagation, but little of it factors the perception of sound into agent behavior: visual perception is commonly used for agent steering, while audio perception remains rare.
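To make the abstract visual-perception queries concrete, the following is a minimal sketch of a view-cone test of the kind such systems use. All names and parameters here are illustrative assumptions, not taken from any specific system; the line-of-sight ray cast against scene geometry is noted but stubbed out, since it depends on the environment representation.

```python
import math

def in_view_cone(agent_pos, agent_dir, target_pos,
                 fov_deg=90.0, max_dist=10.0):
    """Return True if target_pos falls inside the agent's view cone.

    agent_dir is assumed to be a unit-length 2D facing vector.
    A full implementation would additionally ray cast from agent_pos
    to target_pos against scene geometry to test for occlusion.
    """
    dx = target_pos[0] - agent_pos[0]
    dy = target_pos[1] - agent_pos[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0:
        return True   # target coincides with the agent
    if dist > max_dist:
        return False  # beyond the agent's perception range
    # Angle between the facing direction and the direction to the target.
    dot = (dx * agent_dir[0] + dy * agent_dir[1]) / dist
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= fov_deg / 2.0
```

For example, an agent at the origin facing along +x perceives a target at (5, 0) but not one directly behind it at (-5, 0), nor one at (20, 0) outside its perception range.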