This paper examines the use of Sound Sensors and audio as input material for New Interfaces for Musical Expression (NIMEs), exploring the unique affordances and character of the interactions and instruments that leverage them. We first examine ten cases in which audio sensors, either microphone capsules or piezoelectric contact microphones, are used as a means of translating gesture into sound. We then present the results of a user study comparing sound-based sensors to other sensing modalities in the context of parameter control. The study suggests that Sound Sensors, and Dynamic Sensing Systems in general, can enhance gestural flexibility and nuance, but that they also present challenges in accuracy and repeatability.