
Emergency Vehicle Alert (E.V.A.)

Team Members

Sylvester Benson-Sesay
Phuc Do
Gabriel Sarch


Ross Maddox, Ph.D., Biomedical Engineering, University of Rochester


Marlene Sutliff and Steven Barnett, UR Community/Deaf Wellness Center, and Daniel Brooks, President, HLAA NYS Association


Drivers need to be alerted of approaching emergency vehicles so that they can move out of the vehicle's path. Identifying emergency signals is especially challenging for deaf, hard-of-hearing, and distracted drivers, which puts them at increased risk of collision. Deaf and hard-of-hearing people are three times as likely to be involved in a motor vehicle accident and up to nine times as likely to be seriously injured in the accident (National Academy of Forensic Engineers, 2016). In this project, we developed an in-car device that detects emergency vehicles and notifies the driver of their presence in real-time. We trained a convolutional neural network to detect sirens in noisy environments. A demonstration of our real-time detector and design schematic is shown below.

System Levels
The figure above shows the system levels of this device: four systems that work independently of each other but, when put together, give a continuous flow of events. The first level is a hardware-software interaction: the microphone records the traffic soundscape in real-time, and the audio is stored in a buffer that breaks it into 3-second chunks. In the second system, each 3-second chunk is sent to the trained convolutional neural network, which outputs the probability that a siren is present. The third system level compares this probability to a set threshold, producing a boolean of siren or no siren present. The fourth system uses this boolean to trigger the output general-purpose input/output (GPIO) pins, which flash the LEDs to alert the driver.
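The four levels above can be sketched in a few lines of Python. This is a simplified illustration, not the device's actual code: the CNN is replaced by a stand-in energy-based scorer, the microphone by a simulated chunk of samples, and the GPIO call by a comment; the sample rate and threshold values are assumptions.

```python
# Sketch of the four system levels (hypothetical names and values --
# the real device uses a USB mic, a trained CNN, and the Pi's GPIO pins).
import numpy as np

SAMPLE_RATE = 16000   # assumed microphone sample rate
CHUNK_SECONDS = 3     # buffer length from the text
THRESHOLD = 0.5       # assumed detection threshold

def detect_siren(chunk: np.ndarray) -> float:
    """Stand-in for the trained CNN: returns P(siren) for a 3-second chunk.
    Faked here with the chunk's RMS energy just to make the sketch run."""
    rms = float(np.sqrt(np.mean(chunk ** 2)))
    return min(rms, 1.0)

def process_chunk(chunk: np.ndarray) -> bool:
    """Levels 2-4: CNN probability -> threshold -> boolean for the GPIO."""
    probability = detect_siren(chunk)          # level 2: CNN output
    siren_present = probability >= THRESHOLD   # level 3: threshold
    # level 4 would be: GPIO.output(LED_PIN, siren_present)
    return siren_present

# Level 1 stand-in: a simulated 3-second "recording" instead of a live mic.
chunk = np.full(SAMPLE_RATE * CHUNK_SECONDS, 0.8)
print(process_chunk(chunk))   # prints True: a loud chunk trips the threshold
```

In the real system, level 1 would continuously refill the buffer from the microphone and hand each completed chunk to `process_chunk`.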


The figure above shows the setup of the device from the left side. The mic detects the siren from the emergency vehicle, causing the lights on the Raspberry Pi to blink green in rotation to warn drivers.
The figure above shows the setup of the device from the right side. The power bank and the Raspberry Pi are both held by the GPS holder to keep them stable during traffic.


  • Raspberry Pi 3
  • USB Microphone
  • Power Bank
  • LED Light Case
  • USB to Micro USB Cable
Schematic of the core of the device, the Raspberry Pi 3, with its peripheral input components labelled

Detection Software

Software Demonstration Video

Below is a video demonstrating our real-time detector while playing various types of noise.

Sample audio validation testing

Below are some urban audio samples that we ran through our CNN. You can see the raw audio signal and output detection probabilities for each of the audio samples. Click the audio file to listen to the sample.

Sample #1: soft siren with some white noise
Sample #2: siren in a noisy urban environment
Sample #3: siren with car and radio background noise
Sample #4: drilling noises (no siren)
Sample #5: heavy urban noise (no siren)
Sample #6: human speech with car noise (no siren)

Receiver Operating Characteristic (ROC) Curves

ROC curves are a way to compare the false alarm rate versus the true positive rate on our validation set (a held-out 10% of the data). The area under the curve (AUC, shown in the bottom right of each graph) is a measure of detection accuracy: an AUC of 1 is a perfect detector, while an AUC of 0.5 is random-chance detection. We computed the ROC at different signal-to-noise ratios (SNRs) by embedding the sirens in different levels of car and environmental noise.
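Embedding a siren at a target SNR amounts to scaling the noise so that the siren-to-noise power ratio, in dB, matches the target. The sketch below shows one standard way to do this; it is an assumption about the mixing procedure, not the authors' exact code, and the tone/noise signals are toy examples.

```python
# Mix a siren into noise at a target SNR (dB) by scaling the noise power.
import numpy as np

def mix_at_snr(siren: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Scale the noise so that 10*log10(P_siren / P_noise) == snr_db."""
    p_siren = np.mean(siren ** 2)
    p_noise = np.mean(noise ** 2)
    # Noise power required for the target SNR, and the matching amplitude gain.
    target_noise_power = p_siren / (10 ** (snr_db / 10))
    gain = np.sqrt(target_noise_power / p_noise)
    return siren + gain * noise

rng = np.random.default_rng(0)
siren = np.sin(2 * np.pi * 700 * np.arange(48000) / 16000)  # toy 700 Hz tone
noise = rng.standard_normal(48000)
mixed = mix_at_snr(siren, noise, snr_db=-5)  # siren 5 dB below the noise
```

Negative SNRs (like the -10 and -5 dB conditions below) mean the noise is louder than the siren, which is the hardest regime for the detector.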

Mixed SNRs (-5, 0, 5, and 10 dB SNR, with the same number of samples at each)
All samples at -10 dB SNR
All samples at -5 dB SNR
All samples at 0 dB SNR
All samples at 5 dB SNR
All samples at 10 dB SNR
All samples at 15 dB SNR
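The ROC/AUC computation described above can be reproduced with a few lines of numpy: sort the validation samples by detector score, sweep the threshold across those scores to get false-alarm and true-positive rates, and integrate with the trapezoid rule. This is a generic sketch of the metric, not the project's evaluation script; the toy scores and labels are made up.

```python
# Minimal ROC/AUC from detector scores and true labels (numpy only).
import numpy as np

def roc_auc(scores: np.ndarray, labels: np.ndarray) -> float:
    order = np.argsort(-scores)          # sort samples by descending score
    labels = labels[order]
    # Sweeping the threshold down through the scores accumulates detections:
    tpr = np.cumsum(labels) / labels.sum()            # true-positive rate
    fpr = np.cumsum(1 - labels) / (1 - labels).sum()  # false-alarm rate
    tpr = np.r_[0.0, tpr]                # prepend the (0, 0) corner
    fpr = np.r_[0.0, fpr]
    # Trapezoid-rule area under the (fpr, tpr) curve.
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# A detector that perfectly separates sirens (label 1) from noise (label 0):
scores = np.array([0.9, 0.8, 0.2, 0.1])
labels = np.array([1, 1, 0, 0])
print(roc_auc(scores, labels))   # prints 1.0, a perfect detector
```

With scores that are uninformative (random with respect to the labels), the same computation approaches 0.5, the random-chance baseline quoted above.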


Special thanks to Prof. Amy Lerner, Prof. Scott Seidman, Mrs. Marlene Sutliff, and Mr. Daniel Brooks for providing us with the opportunity to participate in a wonderful project.

Special thanks to Prof. Ross Maddox, Prof. Steven Barnett, Luke McConnaghy, and the many others who helped us throughout these last two semesters. Without you, we wouldn't have been able to complete our goals for the project.
