By n8nflow Team · June 1, 2025 · 16 min read

How to Scale n8n: Performance Optimization for High-Volume Workflows

Optimize n8n for production scale. Performance tuning, database optimization, queue management, worker scaling, and caching strategies for handling 100K+ workflow executions per day.


n8n can handle impressive throughput — but only if it's configured correctly. Whether you're running 1,000 or 100,000 executions daily, this guide covers the optimization strategies that keep your automation engine running smoothly.

Understanding n8n's Architecture

Before optimizing, understand what you're working with:

Request → Main Process → Workflow Engine → Node Executor → External API
              ↓               ↓                ↓
         PostgreSQL       In-Memory        Rate Limiter
         (state)          (execution)      (queue mode)

Quick Wins: Immediate Performance Boosts

1. Enable Queue Mode

The single biggest performance improvement:

# In your .env or n8n configuration
EXECUTIONS_MODE=queue
QUEUE_BULL_PREFIX=n8n

# Start worker processes (run each in its own shell or container)
n8n worker --concurrency=5
n8n worker --concurrency=5  # second worker = 10 total concurrency

Impact: Handles spikes gracefully; prevents single execution from blocking others.
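
Queue mode also needs to know where Redis lives. A minimal sketch of the extra environment variables (the `redis` hostname is an assumption — adjust to your setup):

```shell
# .env additions for queue mode
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis   # hostname of your Redis instance (assumed)
QUEUE_BULL_REDIS_PORT=6379
```

Both the main process and every worker need these variables, or workers will never pick up jobs.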

2. Database Optimization

PostgreSQL is n8n's backbone. Tune it:

# postgresql.conf optimizations for n8n
shared_buffers = 256MB              # rule of thumb: ~25% of system RAM
effective_cache_size = 1GB
work_mem = 16MB
maintenance_work_mem = 64MB
random_page_cost = 1.1             # If using SSD
effective_io_concurrency = 200     # If using SSD
wal_buffers = 16MB
max_worker_processes = 4
max_parallel_workers_per_gather = 2
max_parallel_workers = 4

# Connection pooling
max_connections = 50  # n8n doesn't need many direct connections

3. Regular Execution Cleanup

Old execution data bloats your database:

-- Delete executions older than 30 days
DELETE FROM execution_entity 
WHERE "createdAt" < NOW() - INTERVAL '30 days';

-- Or use n8n's built-in pruning
-- Settings → Execution Data → Prune Data
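
If you'd rather not run SQL by hand, n8n's built-in pruning can also be enabled via environment variables when self-hosting — a sketch:

```shell
# Automatic execution pruning (self-hosted n8n)
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=720   # hours to keep (720 h = 30 days)
```

Built-in pruning runs continuously, so the database never accumulates a month of bloat before cleanup.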

4. Redis for Queue Mode

Use a dedicated Redis instance for queue mode — don't share it with other apps:

# docker-compose.yml
services:
  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 256mb --maxmemory-policy allkeys-lru
    volumes:
      - redis_data:/data

Workflow-Level Optimizations

Combine Nodes Where It Helps

Each node adds overhead. Combine simple operations:

// Before: 5 separate nodes
// Set Node → Set Node → Set Node → Set Node → HTTP Request

// After: a single Code node
const data = {
  field1: 'value1',
  field2: 'value2',
  field3: 'value3',
  field4: 'value4'
};

// Use the Code node's built-in request helper instead of a separate node
const response = await this.helpers.httpRequest({
  method: 'POST',
  url: 'https://api.example.com',
  body: data,
  json: true,
});

// Code nodes must return items in { json: ... } shape
return [{ json: response }];

Batch Processing

Process items in batches instead of one-at-a-time:

// Instead of 1,000 HTTP calls for 1,000 items,
// make 10 batch calls of 100 items each

const items = $input.all();
const BATCH_SIZE = 100;
const batches = [];

for (let i = 0; i < items.length; i += BATCH_SIZE) {
  batches.push(items.slice(i, i + BATCH_SIZE));
}

// Process each batch sequentially
const results = [];
for (const batch of batches) {
  const response = await this.helpers.httpRequest({
    method: 'POST',
    url: 'https://api.example.com/batch',
    body: { items: batch.map(item => item.json) },
    json: true,
  });
  results.push(...response.results);
}

// Return items in n8n's { json: ... } shape
return results.map(r => ({ json: r }));
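
Sequential batches are the safe default, but if the API tolerates some parallelism you can run a few batches at a time. A sketch — `sendBatch` is a hypothetical placeholder for your actual HTTP call:

```javascript
// Run up to `maxParallel` batch requests at a time.
// `sendBatch` is a hypothetical async function that posts one batch
// and resolves to that batch's results array.
async function processInParallel(batches, sendBatch, maxParallel = 3) {
  const results = [];
  for (let i = 0; i < batches.length; i += maxParallel) {
    const slice = batches.slice(i, i + maxParallel);
    // Wait for this group before starting the next — caps in-flight requests
    const settled = await Promise.all(slice.map(sendBatch));
    results.push(...settled.flat());
  }
  return results;
}
```

Keep `maxParallel` well below the API's rate limit; 2-3 concurrent batches is a sensible starting point.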

Caching External API Calls

Cache frequently-used data to reduce API calls:

// Cache with Redis — assumes a connected Redis client is in scope
// (external modules must be allowed via NODE_FUNCTION_ALLOW_EXTERNAL
// when self-hosting)
const cacheKey = `exchange_rate:${currency}`;
const cached = await redis.get(cacheKey);

if (cached) {
  return JSON.parse(cached);
}

// Cache miss: fetch fresh data and store it with a TTL
const rate = await fetchExchangeRate(currency);
await redis.set(cacheKey, JSON.stringify(rate), 'EX', 3600); // 1-hour TTL

return rate;
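
If standing up Redis isn't worth it at your volume, a small in-process cache works too — with the caveat that in queue mode each worker keeps its own copy, so hit rates drop as workers multiply. A minimal sketch:

```javascript
// Minimal in-memory TTL cache (per-process — not shared across n8n workers)
class TtlCache {
  constructor() {
    this.store = new Map();
  }
  get(key) {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (Date.now() > entry.expiresAt) {
      this.store.delete(key); // lazily evict expired entries
      return undefined;
    }
    return entry.value;
  }
  set(key, value, ttlMs) {
    this.store.set(key, { value, expiresAt: Date.now() + ttlMs });
  }
}
```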

Efficient Data Transformations

// ❌ Intermediate arrays at every named step
const filtered = items.filter(i => i.price > 0);
const mapped = filtered.map(i => ({ name: i.name, total: i.price * i.qty }));
const sorted = mapped.sort((a, b) => b.total - a.total);

// ✅ Combine filter + map into a single pass with reduce, then sort
const result = items
  .reduce((acc, i) => {
    if (i.price > 0) acc.push({ name: i.name, total: i.price * i.qty });
    return acc;
  }, [])
  .sort((a, b) => b.total - a.total);

Scaling Strategies by Volume

Low Volume (< 1,000/day)

Setup: Single n8n instance, basic PostgreSQL

Optimizations: None needed. The default configuration works fine.

Medium Volume (1,000-10,000/day)

Setup: n8n + PostgreSQL + Redis

Optimizations:

  • Enable queue mode with 2 workers
  • Database connection pooling
  • 30-day execution pruning
  • Basic caching for common API calls

High Volume (10,000-100,000/day)

Setup: n8n main + 4-8 workers + PostgreSQL + Redis

Optimizations:

  • PostgreSQL tuning (256MB shared buffers minimum)
  • Separate database server
  • Redis with 512MB memory
  • Load balancer for webhook endpoints
  • Rate limiting on external API calls
  • Circuit breakers for external services
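
That last item deserves a sketch. A circuit breaker stops hammering an API that is already failing: after N consecutive failures it "opens" and fails fast, then allows a trial call after a cooldown. This is illustrative, not an n8n built-in — names and thresholds are assumptions:

```javascript
// Minimal circuit breaker: open after `failureThreshold` consecutive
// failures, allow one trial call after `resetTimeoutMs`.
class CircuitBreaker {
  constructor({ failureThreshold = 5, resetTimeoutMs = 30000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.resetTimeoutMs = resetTimeoutMs;
    this.failures = 0;
    this.openedAt = null;
  }

  async call(fn) {
    if (this.openedAt !== null) {
      if (Date.now() - this.openedAt < this.resetTimeoutMs) {
        throw new Error('Circuit open: skipping call');
      }
      this.openedAt = null; // half-open: let one trial call through
    }
    try {
      const result = await fn();
      this.failures = 0; // success resets the failure count
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.failureThreshold) this.openedAt = Date.now();
      throw err;
    }
  }
}
```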

Very High Volume (100,000+/day)

Setup: n8n main + 8-16 workers + dedicated PostgreSQL (RDS/Cloud SQL) + Redis cluster

Additional considerations:

  • PostgreSQL read replicas for analytics queries
  • Redis cluster for queue reliability
  • Multiple n8n main instances behind load balancer
  • CDN for webhook responses (if applicable)
  • Monitoring stack (Prometheus + Grafana)
  • Log aggregation (ELK, Datadog, etc.)

Infrastructure Examples

Docker Compose (Production)

version: '3.8'
services:
  n8n-main:
    image: n8nio/n8n:latest
    environment:
      - N8N_HOST=your-domain.com
      - N8N_PROTOCOL=https
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on:
      - postgres
      - redis

  n8n-worker:
    image: n8nio/n8n:latest
    command: worker --concurrency=5
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
    depends_on:
      - postgres
      - redis
    deploy:
      replicas: 4

  postgres:
    image: postgres:16-alpine
    volumes:
      - pgdata:/var/lib/postgresql/data
    command: >
      postgres
      -c shared_buffers=256MB
      -c effective_cache_size=1GB
      -c max_connections=50

  redis:
    image: redis:7-alpine
    command: redis-server --maxmemory 256mb

volumes:
  pgdata:

Resource Allocation Guidelines

Component    CPU      RAM    Disk
n8n main     2 vCPU   2 GB   20 GB
n8n worker   1 vCPU   1 GB   10 GB
PostgreSQL   2 vCPU   4 GB   100 GB SSD
Redis        1 vCPU   2 GB   10 GB

Monitoring Performance

Key Metrics to Track

// Custom performance monitoring inside a Code node
const startTime = Date.now();

// ... workflow logic ...

const duration = Date.now() - startTime;

// `$metrics` is a placeholder, not an n8n built-in — swap in your own
// client (e.g. StatsD, Prometheus Pushgateway, or an HTTP call)
await $metrics.record('workflow.duration', duration, {
  workflow: 'payment-processing',
  status: 'success'
});

Essential metrics:

  • Workflow execution duration (p50, p95, p99)
  • Queue wait time (time in queue before execution)
  • Worker utilization (% busy across workers)
  • Database connection pool usage
  • External API latency
  • Webhook response time
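
You don't need a full metrics stack to get the duration percentiles above — a nearest-rank percentile over an array of recorded durations is enough for a first pass. A sketch, assuming you've collected durations in milliseconds:

```javascript
// Nearest-rank percentile: p in [0, 100], values in any order
function percentile(values, p) {
  if (values.length === 0) return undefined;
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, idx)];
}
```

For example, `percentile(durations, 95)` gives the p95 execution time — the value 95% of executions finish under.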

Cost Optimization

Serverless/Cloud Options

Platform       Best For               Monthly Cost (medium volume)
Hetzner VPS    Budget self-hosting    ~$10-20
DigitalOcean   Easy setup             ~$24-48
Railway        Zero-config deploys    ~$20-40
AWS ECS        Enterprise scale       ~$100-300
n8n Cloud      No DevOps needed       ~$20-100

Right-Sizing Your Instance

Don't over-provision. Start with the medium volume setup and scale up only when needed:

  1. Monitor resource usage for 2 weeks
  2. Identify bottlenecks (CPU, RAM, DB, or API)
  3. Scale that specific component
  4. Repeat

Explore our DevOps automation workflows for monitoring and scaling automation, or check out premium production templates.
