If you’ve worked with microservices long enough, you know this debate never dies:
“Should every interaction between services go through APIs?
Or is it fine if one service reads another service’s data directly—as long as writes go through APIs?”
Some engineers treat API-only communication as a religion.
Others quietly open a read-replica and get on with their day.
The truth?
Like most architectural decisions, the answer is “it depends.”
But there is a sensible middle ground that successful companies naturally converge toward.
Let’s break it down.
The Ideal World vs. The Real World
Textbook microservices say this:
“Each service owns its data. No one touches its database. Everything goes through APIs.”
That’s beautiful… but reality is messier.
Real systems deal with:
- High traffic
- Low latency expectations
- Cross-service reporting
- Aggressive SLAs
- Event-driven workflows
- Services that evolve at different speeds
And sometimes, calling an API for every tiny read simply isn’t practical.
But allowing everyone to poke around in each other’s databases?
That’s a recipe for pain.
So how do we balance purity with practicality?
Let’s Start With the Easy Rule: Writes Must Go Through APIs
This one is non-negotiable.
Writes carry:
- Business rules
- Validation
- Authorization
- Side effects
- Domain events
If one service writes directly into another service’s database, it’s basically bypassing the “brain” of that system.
That’s how data gets corrupted.
That’s how invariants break.
That’s how midnight outages are born.
So we’re aligned here:
✅ Writes → Always through the service’s API
No shortcuts. No exceptions.
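To make the "brain" point concrete, here's a minimal sketch of a write path that only exists behind the owning service's API. Every name here (`OrderService`, `create_order`, the `order-writer` role) is a hypothetical stand-in, not a real framework:

```python
# Hypothetical sketch: the write path lives behind the owning service's
# API layer, so business rules, authorization, and domain events always
# run. A direct INSERT into the database would skip all three.

class ValidationError(Exception):
    pass

class OrderService:
    """Owns the orders data; the only component allowed to write it."""

    def __init__(self):
        self._orders = {}   # stand-in for the service's private database
        self._events = []   # domain events emitted on successful writes

    def create_order(self, order_id, quantity, caller_role):
        # Authorization: only trusted callers may write.
        if caller_role != "order-writer":
            raise PermissionError("caller may not create orders")
        # Validation: an invariant a direct DB write would silently bypass.
        if quantity <= 0:
            raise ValidationError("quantity must be positive")
        self._orders[order_id] = {"quantity": quantity}
        # Side effect: publish a domain event for downstream consumers.
        self._events.append(("OrderCreated", order_id))
        return self._orders[order_id]

svc = OrderService()
order = svc.create_order("o-1", 2, caller_role="order-writer")
```

A raw write to `_orders` would skip the quantity check and never emit `OrderCreated`, which is exactly the corruption-plus-missing-events failure mode described above.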
Reads, On the Other Hand… Are More Flexible
This is where nuance comes in.
Reads are often:
- High volume
- Latency-sensitive
- Aggregation-heavy
- Used for analytics or dashboards, not transactional logic
And hitting a service API for each read can create:
- Extra hops
- Failure chains
- Scaling bottlenecks
- Increased infrastructure cost
So it’s not surprising that many mature architectures start doing this:
**Use APIs for writes.
Use optimized data sources for reads.**
Yes — that means direct reads can be okay, but only if they’re done safely.
Let’s talk about what “safe” means.
When Direct Reads Are Safe
Direct reads don’t mean “connect to the main production database and hope nothing breaks.”
They mean reading from a controlled, read-only source like:
1. Read Replicas
A service might read from a replica of another service’s database, isolated from writes.
2. CDC (Change Data Capture) Pipelines
Using tools like Debezium, Kafka Connect, DynamoDB Streams, BigQuery streaming, or Spanner Change Streams, a service can build a local read model of another service’s data.
3. Search and Analytics Indices
Elasticsearch, Redis, or BigQuery tables built specifically for reading.
4. Materialized Views
A “snapshot” of the data that’s updated asynchronously.
In all of these cases:
- The write service remains the source of truth
- No one is corrupting data
- Schema evolution can be managed
- Performance is optimized
This pattern is basically CQRS, but applied across microservices.
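The core of the CDC pattern is simple: the owning service emits change events, and the reading service folds them into its own local read model. Here's a minimal sketch of that fold; the event shape (`op`/`key`/`value`, with `c`/`u`/`d` operations) is an assumption loosely modeled on Debezium-style payloads, not any tool's exact format:

```python
# Sketch of a CDC consumer: fold change events into a local read model.
# Event shape is an illustrative assumption (Debezium-like op codes:
# "c" = create, "u" = update, "d" = delete).

def apply_change(read_model, event):
    """Apply one change event to the local read model (a dict)."""
    op, key = event["op"], event["key"]
    if op in ("c", "u"):           # create or update: upsert the row
        read_model[key] = event["value"]
    elif op == "d":                # delete: drop the row if present
        read_model.pop(key, None)
    return read_model

# Replaying the stream in order rebuilds the read model from scratch.
stream = [
    {"op": "c", "key": "order-1", "value": {"status": "placed"}},
    {"op": "u", "key": "order-1", "value": {"status": "shipped"}},
    {"op": "c", "key": "order-2", "value": {"status": "placed"}},
    {"op": "d", "key": "order-2"},
]

model = {}
for event in stream:
    apply_change(model, event)
# model now holds only order-1, with its latest status
```

Note the key property: the reading service never queries the writer's database. It only consumes events, so the write service stays the source of truth and the read model can be rebuilt at any time by replaying the stream.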
When Reads Should Not Be Direct
There are clear cases where you must use APIs and nothing else:
❌ If the read involves business rules
Pricing, eligibility, discount logic — these belong inside the service.
❌ If strong consistency matters
If “inventory = 1 item left,” a stale read can oversell.
❌ If the schema changes frequently
Direct consumers break easily.
❌ If security or PII restrictions apply
Raw database access bypasses the boundary controls the API enforces.
In such cases, the API is the safe, stable contract.
The Architecture Most Companies End Up With
After scaling pains, outages, and refactors, most engineering orgs land here:
⭐ Writes → Always through the service’s API
⭐ Reads → Through a read-optimized, contract-driven data layer
(CDC, replicas, search indices, event streams, or materialized views)
This gives you the best of both worlds:
- Services remain loosely coupled
- Performance improves
- You avoid API call chains
- Schema changes don’t explode downstream systems
- Traffic patterns become predictable
- Teams can work independently
This approach isn’t about breaking microservice purity.
It’s about practical, scalable system design.
A Simple Example: Orders vs Inventory
If Inventory needs to check Orders frequently:
Bad: Inventory directly queries Orders’ main database
→ Tight coupling, fragile schema, risk of corruption
Good: Inventory gets a CDC stream or read replica of Orders
→ Local read model, fast queries, no business rule leakage
Writes?
Inventory can update Orders only through Orders’ API.
No exceptions.
Clean. Safe. Scalable.
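The split above can be sketched in a few lines. Everything here is hypothetical (`OrdersAPI`, `InventoryService`, the read model as a plain dict kept fresh by CDC or replication); the point is only the shape: reads hit the local copy, writes go through the owning service's API:

```python
# Illustrative sketch of the Orders/Inventory split: Inventory reads
# from its own local read model (fed by CDC or a replica), but writes
# to orders only through the Orders service's API.

class OrdersAPI:
    """The only legal write path into the Orders service."""

    def __init__(self):
        self.orders = {}  # stand-in for Orders' private database

    def cancel_order(self, order_id):
        if order_id not in self.orders:
            raise KeyError(f"unknown order {order_id}")
        self.orders[order_id]["status"] = "cancelled"

class InventoryService:
    def __init__(self, orders_api, orders_read_model):
        self.orders_api = orders_api
        # Local copy kept fresh asynchronously; treated as read-only.
        self.orders_read_model = orders_read_model

    def pending_order_count(self):
        # Fast local read: no cross-service API hop, no failure chain.
        return sum(1 for o in self.orders_read_model.values()
                   if o["status"] == "placed")

    def cancel(self, order_id):
        # Writes go through the owning service's API, never its DB.
        self.orders_api.cancel_order(order_id)

api = OrdersAPI()
api.orders = {"o-1": {"status": "placed"}, "o-2": {"status": "shipped"}}
read_model = dict(api.orders)  # stands in for a CDC-fed local copy
inv = InventoryService(api, read_model)
```

The asymmetry is the whole design: `pending_order_count` touches only local data and can be called on every request, while `cancel` pays the API hop precisely because it mutates state that Orders owns.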
So What’s the Final Answer?
Here’s the human, experience-based conclusion:
**✔ If you’re writing: use APIs.
✔ If you’re reading: choose whatever gives you performance and stability — as long as it’s read-only and contract-driven.**
This is how most high-scale systems operate behind the scenes.
It’s not dogma — it’s pragmatism.
Microservices aren’t about enforcing purity.
They’re about enabling teams to move fast without breaking each other.
And sometimes, that means being flexible about reads while staying strict about writes.