Technologies Involved: Rasa
Area of Work: Machine Learning
Project Description

A Germany-based SaaS provider specializing in AI-driven automation approached Oodles to deploy a scalable chatbot server using Rasa. With growing demand from enterprise clients, the company needed a solution that could host multiple client-specific bots on shared infrastructure. The engagement focused on multi-bot architecture, cloud deployment, and real-time channel integrations.

Scope Of Work

The client engaged Oodles to move a Rasa chatbot server from a local environment to the cloud, with support for managing multiple bots for different clients. The project involved server provisioning, Rasa framework deployment, bot instance separation, and integration with communication platforms such as WhatsApp and Facebook Messenger.

Our Solution

Oodles implemented a production-grade chatbot deployment strategy tailored for multi-client SaaS use. 

Key Features Implemented:

  • Rasa Chatbot Server Setup: Deployed the chatbot system on a secure cloud server (AWS), moving from local to production.
  • Multi-Bot Architecture: Enabled per-client isolation using individual NLU pipelines and domain configurations for each bot (see the routing sketch after this list).
  • Conversational AI Customization: Configured Rasa Core and Rasa NLU for contextual flow, intent classification, and entity recognition.
  • Third-Party Channel Integration: Connected bots to WhatsApp, Telegram, and Facebook Messenger using Rasa channel connectors.
  • Containerization with Docker: Ensured environment consistency and simplified bot updates and scaling.
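
Below is a minimal sketch of how per-client message routing might look in a multi-bot setup like the one described above, assuming each client's bot runs as its own Rasa service exposing the standard REST webhook (/webhooks/rest/webhook). The host names, ports, and client identifiers are illustrative placeholders, not the client's actual configuration.

```python
import requests

# Hypothetical mapping of client identifiers to their isolated Rasa instances.
# In a containerized deployment, each bot would run in its own container with
# its own NLU pipeline and domain; the URLs here are placeholders.
BOT_ENDPOINTS = {
    "client_a": "http://client-a-bot:5005/webhooks/rest/webhook",
    "client_b": "http://client-b-bot:5005/webhooks/rest/webhook",
}

def route_message(client_id: str, sender_id: str, text: str) -> list:
    """Forward a user message to the Rasa server owned by `client_id` and
    return the bot's replies (Rasa's REST channel returns a JSON list)."""
    endpoint = BOT_ENDPOINTS.get(client_id)
    if endpoint is None:
        raise ValueError(f"No bot registered for client '{client_id}'")
    response = requests.post(
        endpoint,
        json={"sender": sender_id, "message": text},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Example: a message for client_a's bot, already normalised by a channel
    # connector into plain sender/text fields.
    for reply in route_message("client_a", "user-42", "Hi, I need help with my order"):
        print(reply.get("text", reply))
```

In practice, Rasa's own channel connectors (WhatsApp, Telegram, Facebook Messenger) can post directly to each bot's webhook endpoints, so an explicit routing layer like this is only one possible way of keeping client traffic separated.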