A Canada-based assistive technology firm focused on accessible innovation partnered with Oodles to enhance its mobile app with AI-driven audio detection. The client needed a seamless way to process live microphone input and detect contextual audio patterns in real time. Oodles enabled this through deep learning model integration within a React Native environment, ensuring smooth, responsive user journeys.
The client engaged Oodles to translate a pre-trained audio analytics model into a mobile-ready format and enable real-time detection. The project covered model conversion, mobile app integration, audio streaming, real-time inference, and user navigation driven by model output, all within a React Native framework tailored for accessibility and performance.
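The real-time detection loop described above can be sketched as follows. This is a minimal illustration, not the client's actual code: the model architecture, frame size (1-second, 16 kHz mono), and class labels are all hypothetical stand-ins, and the live microphone stream is simulated with random frames.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the converted audio model: a tiny classifier
# over 1-second, 16 kHz mono frames with 4 illustrative sound classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16000,)),
    tf.keras.layers.Dense(4, activation="softmax"),
])
tflite_bytes = tf.lite.TFLiteConverter.from_keras_model(model).convert()

# Load the converted model into the TensorFlow Lite interpreter,
# which is what runs on-device inside the mobile app.
interpreter = tf.lite.Interpreter(model_content=tflite_bytes)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

def classify(frame: np.ndarray) -> int:
    """Run one audio frame through the model; return the predicted class index."""
    interpreter.set_tensor(inp["index"], frame[np.newaxis, :].astype(np.float32))
    interpreter.invoke()
    return int(np.argmax(interpreter.get_tensor(out["index"])))

labels = ["silence", "alarm", "doorbell", "speech"]  # illustrative labels only
for _ in range(3):
    frame = np.random.uniform(-1.0, 1.0, 16000)  # simulated microphone frame
    detected = labels[classify(frame)]
    # In the app, this result would drive navigation,
    # e.g. routing the user to an alert screen when an alarm is detected.
```

In the production app the frames come from the device microphone via the React Native audio layer, and the predicted class feeds the navigation logic rather than a print loop.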
To align with the client's goal of real-time accessibility support, Oodles implemented a robust machine learning pipeline within the mobile ecosystem. The team converted the existing Python-based audio model to the TensorFlow Lite format, enabling efficient on-device inference.
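The conversion step might look like the sketch below. The client's actual model is not published, so a tiny Keras audio classifier stands in for it here; the layer shapes, class count, and file name are all assumptions for illustration.

```python
import tensorflow as tf

# Hypothetical stand-in for the client's pre-trained Python audio model:
# a small classifier over 1-second, 16 kHz mono waveforms.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(16000,)),
    tf.keras.layers.Reshape((16000, 1)),
    tf.keras.layers.Conv1D(8, 400, strides=160, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(4, activation="softmax"),  # 4 example sound classes
])

# Convert to TensorFlow Lite with the default size/latency optimizations,
# producing a flat buffer suitable for bundling into the mobile app.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_bytes = converter.convert()

with open("audio_model.tflite", "wb") as f:
    f.write(tflite_bytes)
```

The resulting `.tflite` file is shipped with the app and loaded by the on-device interpreter, so no network round-trip is needed for detection.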
Key Features Delivered: