Smart Audio Awareness Glasses
Machine Learning-Powered Wearable for Real-Time Audio Direction Detection
Product Design Requirements
David Thorn, Founder and President of Purple Dragon LLC, engaged Design 1st to explore the feasibility of a wearable audio direction-finding device mounted on eyeglasses.
The concept: a multi-microphone array that listens to surrounding sound, determines its direction of arrival, and communicates that direction back to the wearer through embedded LEDs, enabling people with hearing impairments to visually perceive where sounds around them originate.
- Build a proof-of-concept multi-microphone device demonstrating audio source direction detection, volume/amplitude measurement, and tone/frequency analysis
- Design a 6-microphone circular array optimized for spatial audio capture with matched sensitivity, SNR, gain, and phase response characteristics
- Deliver a standalone functional demo system running real-time inference on embedded hardware
- Characterize and mitigate thermal behavior during extended use
- Primary application: assistive technology for hearing-impaired users; secondary applications include alerting workers in high-noise construction and industrial environments
- R&D engagement scoped at 632 hours across hardware (160 hrs), software (160 hrs), testing (240 hrs), industrial design (48 hrs), and supply (24 hrs)
The Electronics Engineering Challenges
Traditional analytical methods for audio source direction detection rely on complex mathematical models and perform poorly in reverberant, real-world environments. Design 1st’s electronics team selected a machine learning approach for its potential to adapt to diverse acoustic conditions without manual parameter tuning.
- Designed and characterized a 6-microphone circular array, selecting microphone type and optimizing placement spacing to capture the interchannel delay and amplitude differences critical to direction-of-arrival estimation
- Developed multi-channel audio capture circuitry with codec/ADC components capable of digitizing six simultaneous audio streams at sufficient resolution for ML feature extraction
- Evaluated off-the-shelf (OTS) vs. custom board architectures, building the demo platform on a Raspberry Pi with a 7” touchscreen LCD and a USB-connected microphone array for rapid iteration
- Integrated an Arduino-controlled stepper motor turntable to automate microphone array rotation during data capture, enabling hundreds of samples at precisely controlled angles across full 360-degree sweeps
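The interchannel delay cues that drive direction-of-arrival estimation can be illustrated with a short far-field geometry sketch. This is a minimal model, not Design 1st's implementation; the 5 cm array radius and 48 kHz sample rate are illustrative assumptions, since the case study does not publish the array's dimensions or capture rate.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air at ~20 °C
FS = 48_000             # sample rate in Hz (assumed, not from the case study)
N_MICS = 6
RADIUS = 0.05           # 5 cm array radius (illustrative assumption)

def mic_positions(n_mics=N_MICS, radius=RADIUS):
    """(x, y) coordinates of a circular microphone array in the horizontal plane."""
    angles = 2 * np.pi * np.arange(n_mics) / n_mics
    return radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

def tdoa_samples(source_angle_deg, positions=None, fs=FS):
    """Arrival delay per microphone (in samples) for a far-field plane wave,
    relative to the microphone the wavefront reaches first."""
    if positions is None:
        positions = mic_positions()
    theta = np.deg2rad(source_angle_deg)
    direction = np.array([np.cos(theta), np.sin(theta)])  # unit vector toward source
    # A mic's projection onto the source direction sets its relative path length:
    # mics closer to the source along that axis are hit earlier.
    delays_s = -(positions @ direction) / SPEED_OF_SOUND
    return (delays_s - delays_s.min()) * fs
```

At this scale the largest interchannel delay (between diametrically opposite mics) is 2 × 0.05 / 343 ≈ 0.29 ms, only about 14 samples at 48 kHz, which is why matched phase response across channels and adequate ADC resolution matter for the ML features.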
The Machine Learning & Software Challenges
With no existing ML model for this specific audio localization task, Design 1st’s software team built the entire data pipeline and inference system from scratch, from capture methodology through trained model deployment on edge hardware.
- Constructed a custom sound isolation test chamber with acoustic treatment for controlled dataset capture, supplementing outdoor and indoor recording sessions across multiple rooms and acoustic profiles
- Captured and curated datasets spanning 18+ recording sessions in varied environments — professional recording studios, offices, outdoor fields, and the custom test chamber — with each session producing approximately 1.8 GB of multi-channel audio data
- Trained deep learning models to correlate subtle interchannel audio differences with angle of incidence, independent of actual audio content or ambient conditions
- Utilized Pyroomacoustics software to generate simulated audio datasets with configurable room geometry, air absorption, SNR, surface materials, and reverberation time — accelerating training data generation beyond what physical capture alone could achieve
- Deployed real-time inference on the Raspberry Pi demo unit achieving 1–2 second detection latency with threshold-based activity filtering to suppress spurious background noise
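The threshold-based activity filtering mentioned above can be sketched as a simple per-frame energy gate that decides which frames are loud enough to hand to the inference model. This is a hypothetical minimal version assuming a mono float signal; the frame length and threshold used on the demo unit are not published.

```python
import numpy as np

def frame_rms(x, frame_len):
    """RMS energy of each non-overlapping frame of a 1-D signal."""
    n = len(x) // frame_len
    frames = x[: n * frame_len].reshape(n, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))

def activity_mask(x, frame_len=1024, threshold=0.02):
    """Boolean mask of frames considered active; only these frames would be
    passed to the direction-inference model, suppressing background noise."""
    return frame_rms(x, frame_len) > threshold
```

Gating on energy before inference keeps spurious low-level noise from producing flickering direction estimates, at the cost of the 1–2 second latency budget noted above for accumulating enough active audio.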
Product Results
Following extensive R&D by Design 1st, the Smart Audio Awareness Glasses project validated that machine learning-based audio spatial localization is a viable path toward a wearable, head-mounted assistive device.
A standalone demo system was assembled and shipped to the client, where independent testing successfully replicated Design 1st’s directional detection findings, confirming the robustness of the ML approach outside of D1’s controlled lab environment.
- ML model achieved cross-dataset accuracy of 96.7–99.5% in direction classification across 20 angular positions, validated through extensive confusion matrix analysis across multiple independent training and test dataset combinations
- Delivered a functional Raspberry Pi-based demo unit performing real-time audio direction inference with visual feedback on a 7” LCD display, with a recorded demo video showcasing live directional detection
- Design 1st’s feasibility assessment, signed by VP Electronics Donovan Wallace, confirmed that a purpose-built product designed from the ground up can meet the size and power constraints of a body-worn or head-mounted assistive device
- Project advanced from R&D into early Detailed Engineering phase, establishing the technical foundation for a production-ready audio spatial localization product
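The cross-dataset accuracy figures above come from confusion-matrix analysis over 20 angular classes. A minimal sketch of that evaluation step, using hypothetical label arrays rather than the project's actual data:

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=20):
    """Count matrix: rows are true angle classes, columns are predictions."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    """Overall classification accuracy: correct predictions on the diagonal."""
    return np.trace(cm) / cm.sum()
```

With 20 angular positions, each row of the matrix shows how one true angle was classified, so off-diagonal mass reveals whether errors cluster at neighboring angles (small bearing error) or scatter randomly.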