When NOT to Use Kafka: 5 Scenarios Where Simpler Wins

Kafka adds complexity you may not need. Five scenarios where PostgreSQL, Redis Pub/Sub, SQS, or simple webhooks outperform Kafka.

Stéphane Derosiaux · May 21, 2024

I've consulted for teams debugging consumer group rebalances at 3 AM for systems processing 50 messages per hour. Kafka is powerful. It's also frequently overkill.

Teams adopt Kafka because they've heard it handles "big data" or because a blog post made it sound like the obvious choice. Six months later, they're managing a distributed system for traffic a PostgreSQL table could handle.

"We spent months configuring brokers, Schema Registry, and ZooKeeper for 2,000 messages per day. Then we realized: that's 1.4 messages per minute. A cron job would've worked."

– Engineering Manager at a consulting firm

Scenario 1: Low Volume (Under 10K Messages/Day)

At 2,000 messages/day, you're processing 1.4 messages per minute. The infrastructure overhead of Kafka—brokers, partitions, consumer groups, monitoring—isn't justified.

Better alternatives:

-- PostgreSQL as a work queue: SKIP LOCKED lets concurrent workers
-- each claim a different pending task without blocking one another
SELECT * FROM tasks
WHERE status = 'pending'
ORDER BY created_at
LIMIT 1
FOR UPDATE SKIP LOCKED;
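
A worker runs this inside a transaction, processes the task, then updates status to 'done' before committing. If the worker crashes mid-task, the transaction rolls back and the row becomes visible to the next poller; no offsets, no consumer groups, no rebalances.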

AWS SQS or GCP Pub/Sub provide managed queuing without infrastructure to maintain.
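
With SQS, the producer and consumer are each a couple of boto3 calls. A minimal sketch, where the queue URL and handle() are placeholders:

import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/tasks"  # placeholder

# Producer: one API call, no brokers to operate
sqs.send_message(QueueUrl=QUEUE_URL, MessageBody='{"task_id": 42}')

# Consumer: long-poll, process, then delete to acknowledge
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    handle(msg["Body"])  # handle() stands in for your processing logic
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])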

Scenario 2: Fire-and-Forget Notifications

Real-time notifications, typing indicators, live dashboards—these don't need durability. If a user misses a "someone is typing" message, nobody cares.

Kafka's durability is wasted: messages written to disk, offsets tracked, retention configured.

Better alternative: Redis Pub/Sub is fire-and-forget by design. No persistence, no offset management. Sub-millisecond latency.
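
With redis-py the whole pattern fits in a dozen lines. A minimal sketch, assuming a local Redis and an illustrative channel name; publisher and subscriber would normally be separate processes:

import redis

r = redis.Redis()

# Publisher process: fire-and-forget; if nobody is subscribed, the message is dropped
r.publish("typing:room42", "alice is typing")  # illustrative channel name

# Subscriber process: sees only what arrives while it is listening
p = r.pubsub()
p.subscribe("typing:room42")
for message in p.listen():
    if message["type"] == "message":
        print(message["data"])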

Requirement              Redis Pub/Sub      Kafka
Message persistence      No                 Yes
Replay capability        No                 Yes
Operational complexity   Low                High
Latency                  Sub-millisecond    Milliseconds

Scenario 3: Request-Reply Patterns

Service A publishes a request, Service B responds on a different topic, Service A correlates by ID. This works. It's also fighting Kafka's design.

The problems: correlation IDs to track across topics, producer batching that delays every round trip, and response timeouts you have to implement yourself.

Better alternative: HTTP/gRPC. Simpler, faster, easier to debug.
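
The same interaction as a direct call is one synchronous request. A minimal sketch with requests, where the URL is a placeholder:

import requests

# Correlation, ordering, and timeouts come for free with the protocol
resp = requests.post(
    "https://service-b.internal/api/v1/quotes",  # placeholder URL
    json={"order_id": 42},
    timeout=2.0,
)
resp.raise_for_status()
quote = resp.json()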

Scenario 4: Simple Batch Processing

A nightly ETL job extracts from a database, transforms, and loads elsewhere. Adding Kafka in the middle adds infrastructure, failure points, and exactly-once configuration for a single consumer.

What you actually need:

import boto3, pandas as pd

# Pull yesterday's orders and land them in S3 as Parquet (bucket name is illustrative)
df = pd.read_sql("SELECT * FROM orders WHERE date = %(d)s", source_db, params={"d": yesterday})
boto3.client("s3").put_object(Bucket="etl-archive", Key=f"orders/{yesterday}.parquet", Body=df.to_parquet())

Signs you're using Kafka as a glorified file system: one producer, one consumer, polling once per hour, waiting for "end of batch" markers.

Scenario 5: Two-Service Architectures

Two services need async communication. Kafka adds 3+ broker nodes, Schema Registry, monitoring, on-call rotations, partition management.

Better alternatives:

  • SQS: No infrastructure, built-in retries, dead-letter queues, pay-per-message
  • Webhooks: If Service B exposes an HTTP endpoint, a webhook with retry logic is often enough; see the sketch below
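
A minimal sketch of the webhook option, using requests with urllib3's built-in retry and backoff; the endpoint URL is a placeholder:

import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Retry transient failures with exponential backoff before giving up
session = requests.Session()
retries = Retry(total=5, backoff_factor=1.0,
                status_forcelist=[502, 503, 504], allowed_methods=["POST"])
session.mount("https://", HTTPAdapter(max_retries=retries))

session.post(
    "https://service-b.internal/hooks/order-created",  # placeholder endpoint
    json={"order_id": 42},
    timeout=5,
)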

When Kafka IS Right

Kafka genuinely solves hard problems:

  • High throughput: Hundreds of thousands to millions of events/second
  • Event sourcing: Replaying history to rebuild state
  • Multi-consumer patterns: 10 services consuming the same events, each with its own offset
  • Decoupling at scale: Hundreds of microservices across teams
  • Compliance and audit: Immutable event logs for regulated industries

Decision Framework

Question                              If no → consider
More than 10K messages/day?           PostgreSQL, SQS
Multiple independent consumers?       Direct calls, webhooks
Need to replay historical events?     Simple queue + database
Latency budget above ~100 ms?         Redis, direct calls
Team has streaming experience?        Managed services
System will grow to 10+ services?     Start with queues, migrate later

The Real Cost

Kafka's complexity isn't free: broker configuration, partition rebalancing, disk management, Kafka-specific expertise, specialized debugging tools, and a minimum of three brokers for production. When Kafka is the right choice, centralized management tooling reduces that overhead significantly.

Kafka is powerful when you need it. The skill is knowing when you don't.

Book a demo to see how Conduktor Console provides centralized visibility and governance without adding more infrastructure.