
Create a Telegram Bot with Mistral Nemotron AI and Conversation Memory

by Ajith Joseph • Updated 3 months ago • Source: n8n.io


🤖 Create a Telegram Bot with Mistral Nemotron AI and Conversation Memory

A sophisticated Telegram bot that provides AI-powered responses with conversation memory. This template demonstrates how to integrate any AI API service with Telegram, making it easy to swap between different AI providers like OpenAI, Anthropic, Google AI, or any other API-based AI model.

🔧 How it works

The workflow creates an intelligent Telegram bot that:

  • 💬 Maintains conversation history for each user
  • 🧠 Provides contextual AI responses using any AI API service
  • 📱 Handles different message types and commands
  • 🔄 Manages chat sessions, including a command to clear history
  • 🔌 Easily adaptable to any AI provider (OpenAI, Anthropic, Google AI, etc.)

āš™ļø Set up steps

šŸ“‹ Prerequisites

  • šŸ¤– Telegram Bot Token (from @BotFather)
  • šŸ”‘ AI API Key (from any AI service provider)
  • šŸš€ n8n instance with webhook capability

šŸ› ļø Configuration Steps

  1. šŸ¤– Create Telegram Bot

    • Message @BotFather on Telegram
    • Create new bot with /newbot command
    • Save the bot token for credentials setup
  2. 🧠 Choose Your AI Provider

    • OpenAI : Get API key from OpenAI platform
    • Anthropic : Sign up for Claude API access
    • Google AI : Get Gemini API key
    • NVIDIA : Access LLaMA models
    • Hugging Face : Use inference API
    • Any other AI API service
  3. šŸ” Set up Credentials in n8n

    • Add Telegram API credentials with your bot token
    • Add Bearer Auth/API Key credentials for your chosen AI service
    • Test both connections
  4. šŸš€ Deploy Workflow

    • Import the workflow JSON
    • Customize the AI API call (see customization section)
    • Activate the workflow
    • Set webhook URL in Telegram bot settings
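For the last step, the webhook can be registered manually via Telegram's Bot API `setWebhook` method. A minimal sketch, assuming placeholder values for the bot token and your n8n webhook URL (n8n's Telegram Trigger can also register the webhook automatically on activation):

```python
def build_set_webhook_request(bot_token: str, webhook_url: str) -> tuple[str, dict]:
    """Return the (endpoint, JSON payload) pair for Telegram's setWebhook method."""
    return (f"https://api.telegram.org/bot{bot_token}/setWebhook",
            {"url": webhook_url})

endpoint, payload = build_set_webhook_request(
    "123456:ABC-placeholder",                             # token from @BotFather (placeholder)
    "https://your-n8n.example.com/webhook/telegram-bot",  # n8n webhook URL (placeholder)
)
# POST `payload` as JSON to `endpoint` to register the webhook.
```

Telegram responds with `{"ok": true, ...}` on success; you can verify the registration afterwards with the `getWebhookInfo` method.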

✨ Features

🚀 Core Functionality

  • 📨 Smart Message Routing: Automatically categorizes incoming messages (commands, text, non-text)
  • 🧠 Conversation Memory: Maintains chat history for each user (last 10 messages)
  • 🤖 AI-Powered Responses: Integrates with any AI API service for intelligent replies
  • ⚡ Command Support: Built-in /start and /clear commands

📱 Message Types Handled

  • 💬 Text Messages: Processed through the AI model with context
  • 🔧 Commands: Special handling for bot commands
  • ❌ Non-text Messages: Polite error message for unsupported content

💾 Memory Management

  • 👤 User-specific chat history storage
  • 🔄 Automatic history trimming (keeps last 10 messages)
  • 🌐 Global state management across workflow executions
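The trimming logic above is simple to express in code. A minimal sketch, where the `chat_histories` dict stands in for n8n's workflow static data (function and variable names are illustrative):

```python
from collections import defaultdict

MAX_HISTORY = 10  # the template keeps only the last 10 messages per user

# Global state keyed by Telegram chat id -- a stand-in for n8n's
# workflow static data, which persists across executions.
chat_histories: dict[int, list[dict]] = defaultdict(list)

def remember(chat_id: int, role: str, content: str) -> list[dict]:
    """Append a message to this user's history, then trim to MAX_HISTORY."""
    history = chat_histories[chat_id]
    history.append({"role": role, "content": content})
    del history[:-MAX_HISTORY]  # drop everything but the newest 10 entries
    return history
```

Keying the store by `chat_id` is what gives each user an independent history, and trimming on every write keeps the prompt size (and API cost) bounded.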

🤖 Bot Commands

  • /start 🎯 - Welcome message with bot introduction
  • /clear 🗑️ - Clears conversation history for a fresh start
  • Regular text 💬 - Processed by AI with conversation context
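The two built-in commands reduce to a small handler. A sketch under assumed names (`handle_command`, `histories`) with illustrative reply wording, not the template's exact strings:

```python
WELCOME = ("👋 Hi! I'm an AI assistant with conversation memory. "
           "Send me a message, or /clear to start fresh.")  # example wording

def handle_command(command: str, chat_id: int, histories: dict) -> str:
    """Return the reply text for the bot's built-in commands."""
    if command == "/start":
        return WELCOME
    if command == "/clear":
        histories.pop(chat_id, None)  # forget only this user's context
        return "🗑️ Conversation history cleared."
    return "Unknown command -- try /start or /clear."
```

Note that `/clear` removes a single user's entry rather than wiping the whole store, so other users' conversations are unaffected.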

🔧 Technical Details

🏗️ Workflow Structure

  1. 📡 Telegram Trigger - Receives all incoming messages
  2. 🔀 Message Filtering - Routes messages based on type/content
  3. 💾 History Management - Maintains conversation context
  4. 🧠 AI Processing - Generates intelligent responses
  5. 📤 Response Delivery - Sends formatted replies back to the user
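Step 2's filtering can be sketched as a small routing function. The message shape follows Telegram's update format (a message dict with an optional `text` field); the category names are illustrative:

```python
def categorize(message: dict) -> str:
    """Mirror the workflow's first routing step: command, text, or non-text."""
    text = message.get("text")
    if text is None:
        return "non-text"  # photos, stickers, voice notes, documents, ...
    if text.startswith("/"):
        return "command"   # e.g. /start, /clear
    return "text"          # regular chat -- goes to the AI model
```

Each category then flows to a different branch: commands get canned replies, text goes through history management and AI processing, and non-text gets the polite error message.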

🤖 AI API Integration (Customizable)

Current Example (NVIDIA):

  • Model: mistralai/mistral-nemotron
  • Temperature: 0.6 (balanced creativity)
  • Max tokens: 4096
  • Response limit: under 200 words
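Putting those parameters together, the request body sent to NVIDIA's OpenAI-compatible endpoint looks roughly like this. The model name, temperature, and token limit come from the list above; the system-prompt wording is an assumption:

```python
def build_nvidia_payload(history: list[dict]) -> dict:
    """Chat-completions request body using the template's stated parameters."""
    system = {"role": "system",  # prompt wording is illustrative, not the template's
              "content": "You are a helpful assistant. Keep replies under 200 words."}
    return {
        "model": "mistralai/mistral-nemotron",
        "messages": [system, *history],  # system prompt + per-user history
        "temperature": 0.6,
        "max_tokens": 4096,
    }

# POST this as JSON to https://integrate.api.nvidia.com/v1/chat/completions
# with an "Authorization: Bearer <NVIDIA_API_KEY>" header.
```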

🔄 Easy to Replace with Any AI Service:

OpenAI Example:

{
  "model": "gpt-4",
  "messages": [...],
  "temperature": 0.7,
  "max_tokens": 1000
}

Anthropic Claude Example:

{
  "model": "claude-3-sonnet-20240229",
  "messages": [...],
  "max_tokens": 1000
}

Google Gemini Example:

{
  "contents": [...],
  "generationConfig": {
    "temperature": 0.7,
    "maxOutputTokens": 1000
  }
}
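The three request shapes above can be produced from one shared history by a small dispatcher, which is essentially what the "Prepare API Request" node does. A sketch where the provider keys and model choices are illustrative:

```python
def build_payload(provider: str, messages: list[dict]) -> dict:
    """Shape one shared chat history into each provider's request format."""
    if provider in ("openai", "nvidia"):  # both use the OpenAI-compatible schema
        model = "gpt-4" if provider == "openai" else "mistralai/mistral-nemotron"
        return {"model": model, "messages": messages,
                "temperature": 0.7, "max_tokens": 1000}
    if provider == "anthropic":
        return {"model": "claude-3-sonnet-20240229",
                "messages": messages, "max_tokens": 1000}
    if provider == "google":  # Gemini nests turns under "contents"/"parts"
        contents = [{"role": "user" if m["role"] == "user" else "model",
                     "parts": [{"text": m["content"]}]} for m in messages]
        return {"contents": contents,
                "generationConfig": {"temperature": 0.7, "maxOutputTokens": 1000}}
    raise ValueError(f"unknown provider: {provider}")
```

Only the Gemini branch needs real translation work (role renaming and the `parts` nesting); the others differ mainly in field names and model identifiers.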

šŸ›”ļø Error Handling

  • āŒ Non-text message detection and appropriate responses
  • šŸ”§ API failure handling
  • āš ļø Invalid command processing

🎨 Customization Options

🤖 AI Provider Switching

To use a different AI service, modify the "NVIDIA LLaMA Chat Model" node:

  1. 📝 Change the URL in the HTTP Request node
  2. 🔧 Update the request body format in the "Prepare API Request" node
  3. 🔐 Update the authentication method if needed
  4. 📊 Adjust response parsing in the "Save AI Response to History" node
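Step 4 is where providers differ most: each one nests the assistant's reply at a different path in the JSON response. A hedged sketch of the per-provider extraction (field paths follow each vendor's documented response shape; error handling is omitted for brevity):

```python
def extract_reply(provider: str, response: dict) -> str:
    """Pull the assistant's text out of each provider's response JSON."""
    if provider in ("openai", "nvidia"):  # OpenAI-compatible schema
        return response["choices"][0]["message"]["content"]
    if provider == "anthropic":           # Claude Messages API
        return response["content"][0]["text"]
    if provider == "google":              # Gemini generateContent
        return response["candidates"][0]["content"]["parts"][0]["text"]
    raise ValueError(f"unknown provider: {provider}")
```

Whatever this returns is what gets appended to the user's history and sent back through Telegram, so switching providers only requires changing this one lookup plus the request builder.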

🧠 AI Behavior

  • 📝 Modify the system prompt in the "Prepare API Request" node
  • 🌡️ Adjust temperature and response parameters
  • 📏 Change response length limits
  • 🎯 Customize model-specific parameters

💾 Memory Settings

  • 📊 Adjust history length (currently 10 messages)
  • 👤 Modify user identification logic
  • 🗄️ Customize the data persistence approach

🎭 Bot Personality

  • 🎉 Update welcome message content
  • ⚠️ Customize error messages and responses
  • ➕ Add new command handlers

💡 Use Cases

  • 🎧 Customer Support: Automated first-line support with context awareness
  • 📚 Educational Assistant: Homework help and learning support
  • 👥 Personal AI Companion: General conversation and assistance
  • 💼 Business Assistant: FAQ handling and information retrieval
  • 🔬 AI API Testing: Perfect template for testing different AI services
  • 🚀 Prototype Development: Quick AI chatbot prototyping

šŸ“ Notes

  • 🌐 Requires active n8n instance for webhook handling
  • šŸ’° AI API usage may have rate limits and costs (varies by provider)
  • šŸ’¾ Bot memory persists across workflow restarts
  • šŸ‘„ Supports multiple concurrent users with separate histories
  • šŸ”„ Template is provider-agnostic - easily switch between AI services
  • šŸ› ļø Perfect starting point for any AI-powered Telegram bot project

🔧 Popular AI Services You Can Use

| Provider | Model Examples | API Endpoint Style |
| --- | --- | --- |
| 🟢 OpenAI | GPT-4, GPT-3.5 | https://api.openai.com/v1/chat/completions |
| 🔵 Anthropic | Claude 3 Opus, Sonnet | https://api.anthropic.com/v1/messages |
| 🔴 Google | Gemini Pro, Gemini Flash | https://generativelanguage.googleapis.com/v1beta/models/ |
| 🟡 NVIDIA | LLaMA, Mistral | https://integrate.api.nvidia.com/v1/chat/completions |
| 🟠 Hugging Face | Various OSS models | https://api-inference.huggingface.co/models/ |
| 🟣 Cohere | Command, Generate | https://api.cohere.ai/v1/generate |

Simply replace the HTTP Request node configuration to switch providers!