Technologies Involved:
TensorFlow
Area Of Work: Machine Learning
Project Description

A Canada-based assistive technology firm focused on accessible innovation partnered with Oodles to enhance its mobile app with AI-driven audio detection. The client needed a seamless way to process live microphone input and detect contextual audio patterns in real time. Oodles delivered this by integrating a deep learning model directly into the app's React Native environment, ensuring smooth, responsive user journeys.

Scope Of Work

The client engaged Oodles to convert a pre-trained audio analytics model into a mobile-ready format and enable real-time detection. The project covered model conversion, mobile app integration, live audio streaming, real-time inference, and user navigation driven by model output, all within a React Native framework tailored for accessibility and performance.

Our Solution

To align with the client's goal of real-time accessibility support, Oodles implemented a robust machine learning pipeline within the mobile ecosystem. The team converted the existing Python-based audio model to the TensorFlow Lite format, enabling efficient on-device inference.
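
The case study does not include the client's conversion script, but the step typically looks like the following sketch, assuming a TensorFlow SavedModel as the starting point; the paths and optimization settings are illustrative, not the client's actual configuration.

```python
import tensorflow as tf

# Load the trained audio model; "audio_model/" is an assumed SavedModel path.
converter = tf.lite.TFLiteConverter.from_saved_model("audio_model/")

# Default optimizations quantize weights to shrink the model for mobile use.
converter.optimizations = [tf.lite.Optimize.DEFAULT]

tflite_model = converter.convert()
with open("audio_model.tflite", "wb") as f:
    f.write(tflite_model)
```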

Key Features Delivered:

  • TensorFlow Lite Model Conversion: Enabled mobile compatibility and reduced processing latency (sketched above).
  • Live Audio Capture Pipeline: Streamed microphone input continuously and pre-processed it to match the model's input shape (see the capture sketch after this list).
  • Real-time Detection Engine: Analyzed live audio on the device and returned contextual results with no external API calls (see the inference sketch below).
  • User Flow Automation: Navigated users through the app dynamically in response to detection events, enhancing interactivity.
  • Result Formatting and Filtering: Added output layers to assess, log, and act on specific detection criteria in real time (see the thresholding sketch below).
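
The capture pipeline can be illustrated with a minimal Python sketch using the sounddevice library as a stand-in for the app's native microphone module; the sample rate and block size are assumptions, since the model's actual input window is not stated in the case study.

```python
import queue

import sounddevice as sd

SAMPLE_RATE = 16000   # assumed model sample rate
BLOCK_FRAMES = 16000  # one-second blocks; the real window depends on the model

audio_blocks = queue.Queue()  # buffered mono blocks awaiting pre-processing

def on_audio(indata, frames, time, status):
    # Copy each incoming mono block so downstream code owns its own buffer.
    audio_blocks.put(indata[:, 0].copy())

# Stream microphone input continuously into the queue.
stream = sd.InputStream(samplerate=SAMPLE_RATE, channels=1,
                        blocksize=BLOCK_FRAMES, dtype="float32",
                        callback=on_audio)
stream.start()
```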
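The detection engine can be sketched with TensorFlow Lite's Python interpreter; on the device this logic runs through the React Native bridge rather than Python, so treat it as an outline of the inference step only, with the model path and input shape assumed.

```python
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="audio_model.tflite")
interpreter.allocate_tensors()
input_detail = interpreter.get_input_details()[0]
output_detail = interpreter.get_output_details()[0]

def detect(window: np.ndarray) -> np.ndarray:
    """Run one on-device inference pass over a pre-processed audio window."""
    # Batch of one; the exact input shape depends on the client's model.
    tensor = window.reshape(1, -1).astype(np.float32)
    interpreter.set_tensor(input_detail["index"], tensor)
    interpreter.invoke()
    return interpreter.get_tensor(output_detail["index"])[0]
```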
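Finally, the filtering and user-flow layers amount to thresholding model scores and emitting app events. The label set and confidence cut-off below are hypothetical, since the case study does not name the detection classes.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.8  # illustrative cut-off, not the client's tuned value
LABELS = ["background", "alert", "speech"]  # hypothetical label set

def handle_scores(scores: np.ndarray):
    """Log a detection and return a navigation event when confidence is high."""
    best = int(scores.argmax())
    if LABELS[best] != "background" and float(scores[best]) >= CONFIDENCE_THRESHOLD:
        print(f"detected {LABELS[best]} ({scores[best]:.2f})")  # logging hook
        # The returned event name would be consumed by the app's navigator.
        return f"detected:{LABELS[best]}"
    return None
```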