09 December, 2025

Should Microservices Only Talk Through APIs? Or Is It Okay to Read Data Directly?

 

If you’ve worked with microservices long enough, you know this debate never dies:

“Should every interaction between services go through APIs?
Or is it fine if one service reads another service’s data directly—as long as writes go through APIs?”

Some engineers treat API-only communication as a religion.
Others quietly open a read-replica and get on with their day.

The truth?
Like most architectural decisions, the answer is “it depends.”
But there is a sensible middle ground that successful companies naturally converge toward.

Let’s break it down.


The Ideal World vs. The Real World

Textbook microservices say this:

“Each service owns its data. No one touches its database. Everything goes through APIs.”

That’s beautiful… but reality is messier.

Real systems deal with:

  • High traffic

  • Low latency expectations

  • Cross-service reporting

  • Aggressive SLAs

  • Event-driven workflows

  • Services that evolve at different speeds

And sometimes, calling an API for every tiny read simply isn’t practical.

But allowing everyone to poke around in each other’s databases?
That’s a recipe for pain.

So how do we balance purity with practicality?


Let’s Start With the Easy Rule: Writes Must Go Through APIs

This one is non-negotiable.

Writes carry:

  • Business rules

  • Validation

  • Authorization

  • Side effects

  • Domain events

If one service writes directly into another service’s database, it’s basically bypassing the “brain” of that system.

That’s how data gets corrupted.
That’s how invariants break.
That’s how midnight outages are born.

So we’re aligned here:

Writes → Always through the service’s API

No shortcuts. No exceptions.


Reads, On the Other Hand… Are More Flexible

This is where nuance comes in.

Reads are often:

  • High volume

  • Latency-sensitive

  • Aggregation-heavy

  • Used for analytics or dashboards, not transactional logic

And hitting a service API for each read can create:

  • Extra hops

  • Failure chains

  • Scaling bottlenecks

  • Increased infrastructure cost

So it’s not surprising that many mature architectures start doing this:

**Use APIs for writes.**

**Use optimized data sources for reads.**

Yes — that means direct reads can be okay, but only if they’re done safely.

Let’s talk about what “safe” means.


When Direct Reads Are Safe

Direct reads don’t mean “connect to the main production database and hope nothing breaks.”

It means using a controlled, read-only source like:

1. Read Replicas

A service might read from a replica of another service’s database, isolated from writes.

2. CDC (Change Data Capture) Pipelines

Using tools like Debezium, Kafka Connect, DynamoDB Streams, BigQuery Streaming, or Spanner Change Streams, a service can build a local read model of another service’s data.

3. Search and Analytics Indices

Elasticsearch, Redis, or BigQuery tables built specifically for reading.

4. Materialized Views

A “snapshot” of the data that’s updated asynchronously.

In all of these cases:

  • The write service remains the source of truth

  • No one is corrupting data

  • Schema evolution can be managed

  • Performance is optimized

This pattern is basically CQRS, but applied across microservices.


When Reads Should Not Be Direct

There are clear cases where you must use APIs and nothing else:

If the read involves business rules

Pricing, eligibility, discount logic — these belong inside the service.

If strong consistency matters

If “inventory = 1 item left,” a stale read can oversell.

If the schema changes frequently

Direct consumers break easily.

If security or PII restrictions apply

You don't want raw data access bypassing the owning service’s access controls.

In such cases, the API is the safe, stable contract.


The Architecture Most Companies End Up With

After scaling pains, outages, and refactors, most engineering orgs land here:

Writes → Always through the service’s API

Reads → Through a read-optimized, contract-driven data layer

(CDC, replicas, search indices, event streams, or materialized views)

This gives you the best of both worlds:

  • Services remain loosely coupled

  • Performance improves

  • You avoid API call chains

  • Schema changes don’t explode downstream systems

  • Traffic patterns become predictable

  • Teams can work independently

This approach isn’t about breaking microservice purity.
It’s about practical, scalable system design.


A Simple Example: Orders vs Inventory

If Inventory needs to check Orders frequently:

Bad: Inventory directly queries Orders’ main database

→ Tight coupling, fragile schema, risk of corruption

Good: Inventory gets a CDC stream or read replica of Orders

→ Local read model, fast queries, no business rule leakage

Writes?

Inventory can update Orders only through Orders’ API.
No exceptions.

Clean. Safe. Scalable.
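To make the “Good” path a bit more concrete, here’s a minimal sketch of the idea in Go. It assumes a hypothetical OrderChanged event shape and an in-memory channel standing in for a Debezium/Kafka feed; it illustrates the pattern, not any specific CDC library’s API.

package main

import (
    "fmt"
    "sync"
)

// OrderChanged is an illustrative CDC event shape; real Debezium/Kafka
// payloads will differ, but the idea is the same: Orders stays the
// source of truth, and Inventory only mirrors the fields it needs.
type OrderChanged struct {
    OrderID string
    SKU     string
    Qty     int
    Status  string // e.g. "CREATED", "CANCELLED"
}

// OrderReadModel is Inventory's local, read-only view of Orders data.
type OrderReadModel struct {
    mu     sync.RWMutex
    orders map[string]OrderChanged
}

func NewOrderReadModel() *OrderReadModel {
    return &OrderReadModel{orders: make(map[string]OrderChanged)}
}

// Apply updates the local view from the change stream; nothing ever
// flows back into Orders from here.
func (m *OrderReadModel) Apply(ev OrderChanged) {
    m.mu.Lock()
    defer m.mu.Unlock()
    m.orders[ev.OrderID] = ev
}

// OpenQtyForSKU is a fast, local read that never calls the Orders API.
func (m *OrderReadModel) OpenQtyForSKU(sku string) int {
    m.mu.RLock()
    defer m.mu.RUnlock()
    total := 0
    for _, o := range m.orders {
        if o.SKU == sku && o.Status == "CREATED" {
            total += o.Qty
        }
    }
    return total
}

func main() {
    // ordersChanges stands in for a Kafka/CDC consumer loop.
    ordersChanges := make(chan OrderChanged, 2)
    ordersChanges <- OrderChanged{OrderID: "o-1", SKU: "sku-42", Qty: 2, Status: "CREATED"}
    ordersChanges <- OrderChanged{OrderID: "o-2", SKU: "sku-42", Qty: 1, Status: "CREATED"}
    close(ordersChanges)

    model := NewOrderReadModel()
    for ev := range ordersChanges {
        model.Apply(ev)
    }
    fmt.Println("open quantity for sku-42:", model.OpenQtyForSKU("sku-42"))
}

The important property is that data only flows one way: Orders remains the source of truth, and Inventory’s read model can be rebuilt from the stream at any time.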


So What’s the Final Answer?

Here’s the human, experience-based conclusion:

**✔ If you’re writing: use APIs.**

**✔ If you’re reading: choose whatever gives you performance and stability — as long as it’s read-only and contract-driven.**

This is how most high-scale systems operate behind the scenes.
It’s not dogma — it’s pragmatism.

Microservices aren’t about enforcing purity.
They’re about enabling teams to move fast without breaking each other.

And sometimes, that means being flexible about reads while staying strict about writes.

05 May, 2025

 

🧠 Neo4j Quirks: What You Should Know Before Going Graph

Intro

Neo4j is a powerful graph database, but like any tool, it comes with its own set of quirks—things that can trip up even experienced developers if they're expecting traditional RDBMS or NoSQL behavior. This post summarizes key “gotchas” you should be aware of when adopting Neo4j in real-world applications.


1. Relationship Directions Matter (Even When You Think They Don’t)

Neo4j relationships are directed:
(a)-[:FRIENDS_WITH]->(b) is not the same as (b)-[:FRIENDS_WITH]->(a)

Quirk:
When querying, you must match the correct direction unless you explicitly make the pattern undirected:

MATCH (a)-[:FRIENDS_WITH]-(b) // direction-agnostic

Real-World Impact:
Queries may silently return nothing if you match the wrong direction.


2. Large Fanouts Can Kill Performance

What It Means:
A single node with thousands (or millions) of outgoing relationships becomes a performance bottleneck.

Quirk:
Even with indexes, traversals involving large fanouts are slow unless carefully limited.

Tip:
Avoid "celebrity nodes" or batch them with pagination:

MATCH (u:User)-[:FOLLOWS]->(f:User) WHERE u.name = "Ganapathy" RETURN f SKIP 0 LIMIT 100

3. Indexing Is Not as Granular as SQL

Quirk:
Neo4j indexes have traditionally targeted node properties (relationship property indexes only arrived in newer versions), and you can’t index full-path matches.

Example:
You can't index a path like (a)-[:KNOWS]->(b).

Tip:
Use composite indexes or materialize paths into nodes if needed.


4. Orphaned Relationships Are Impossible by Design

Quirk:
You can’t create a relationship without linking it to two nodes.

Good News:
This enforces graph integrity.

Bad News:
You need to clean up nodes and relationships together:


MATCH (a)-[r]->() DELETE r, a
// Or, more simply: MATCH (a) DETACH DELETE a

5. Aggregations and COLLECT Can Confuse New Users

Example:

MATCH (p:Person)-[:KNOWS]->(f) RETURN p.name, COLLECT(f.name)

Quirk:
The COLLECT() function creates lists, and any non-aggregated values in the same RETURN become implicit grouping keys, which can lead to errors or surprising results.

Tip:
Always think in terms of how the result should group—Neo4j isn’t a traditional GROUP BY system.


6. No Strong Schema Enforcement

Quirk:
Neo4j is schema-optional—labels and property types are not strictly enforced.

Consequence:
You can accidentally insert inconsistent data (e.g., age as string vs number).

Mitigation:
Use constraints where possible:

CREATE CONSTRAINT FOR (p:Person) REQUIRE p.id IS UNIQUE
// Older versions (pre-4.4) use: CREATE CONSTRAINT ON (p:Person) ASSERT p.id IS UNIQUE

7. Transactions Can Be Subtle in Cypher

Quirk:
Using the browser or driver may hide transactional nuances.

Gotcha Example:
Running CREATE inside an explicit transaction and expecting the data to be visible to a separate query before that transaction commits will fail; with autocommit, each statement commits immediately.

Fix:
Wrap related operations in a transaction when using the driver:

tx, _ := session.BeginTransaction()
tx.Run(...) // then call tx.Commit() explicitly so other queries can see the data

8. Testing and Cleanup Require Thoughtful Design

Quirk:
Neo4j's test environments don't reset easily unless you clean manually.

Tip:
Create a :TestRun node that relates to all temporary data, and clean it up:


MATCH (:TestRun)-[*]->(n) DETACH DELETE n

✅ Final Thoughts

Neo4j is expressive and great for certain data models—but it’s not magic. Understanding its quirks early helps avoid performance and correctness pitfalls.


26 January, 2025

Understanding Read Your Own Writes Consistency in Distributed Systems

Have you ever updated your social media status only to refresh the page and find your update missing? Or modified a product listing that seemingly disappeared into the void? These frustrating user experiences often stem from a crucial concept in distributed systems: Read Your Own Writes (RYW) consistency.

Why RYW Consistency Matters

In my recent DZone article, I dive deep into the world of Read Your Own Writes consistency - a critical yet often overlooked aspect of distributed system design. While technical requirements often focus on scalability and performance, RYW consistency directly impacts how users perceive and interact with our systems.

Think about it: when you make a change to any system, you expect to see that change immediately. It's not just a technical preference; it's a fundamental user expectation that shapes trust in our applications.

Key Challenges in RYW Implementation

Implementing RYW consistency isn't as straightforward as it might seem. In my DZone article, I explore several key challenges:

  • Managing complex caching layers across browsers, CDNs, and application servers
  • Handling load balancing while maintaining consistency
  • Dealing with replication lag in distributed databases

These challenges become particularly evident in real-world scenarios like social media updates, collaborative document editing, and e-commerce inventory management.

Implementation Strategies That Work

The article outlines several practical implementation strategies, including:

  • Sticky sessions for consistent request routing
  • Write-through caching for immediate updates
  • Version tracking to ensure consistency

Each approach comes with its own trade-offs and considerations, which I discuss in detail in the full article.
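As a quick taste of what version tracking can look like, here’s a toy sketch under simplified assumptions (separate from the examples in the article): the client remembers the version returned by its last write, and the read path refuses to serve anything older, falling back to the primary when the replica lags.

package main

import "fmt"

// Record pairs a value with a monotonically increasing version.
type Record struct {
    Value   string
    Version int64
}

// replica may lag; primary is always current. Plain maps stand in for
// a real replicated datastore in this sketch.
var (
    primary = map[string]Record{}
    replica = map[string]Record{}
)

// Write goes to the primary and returns the new version, which the
// client keeps (e.g. in a cookie or session) as its "minimum version".
func Write(key, value string) int64 {
    rec := Record{Value: value, Version: primary[key].Version + 1}
    primary[key] = rec
    // Replication to the replica happens asynchronously in real systems.
    return rec.Version
}

// Read serves from the replica only if it has caught up to the
// client's last-seen version; otherwise it falls back to the primary.
func Read(key string, minVersion int64) Record {
    if rec, ok := replica[key]; ok && rec.Version >= minVersion {
        return rec
    }
    return primary[key]
}

func main() {
    v := Write("status", "shipped") // client remembers v
    got := Read("status", v)        // replica is stale, so this falls back to the primary
    fmt.Println(got.Value, got.Version)
}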

Want to Learn More?

If you're interested in diving deeper into RYW consistency and learning about specific implementation strategies, check out my complete article on DZone.

The article includes code examples, best practices, and monitoring strategies to help you implement RYW consistency effectively in your distributed systems.

Final Thoughts

As distributed systems become increasingly complex, understanding and implementing RYW consistency becomes more crucial than ever. While eventual consistency might be acceptable for some use cases, users expect their own changes to be reflected immediately.

Have you encountered RYW consistency challenges in your systems? I'd love to hear about your experiences in the comments below.


This post is a companion piece to my detailed technical article on DZone about Read Your Own Writes consistency. For implementation details and code examples, please refer to the full article.

13 December, 2024

Striking the Balance: How Companies Can Personalize Without Compromising Privacy

I recently published an article on DataVersity exploring one of today's most pressing digital challenges: how to deliver personalized experiences while protecting user privacy. As someone deeply interested in both user experience and data protection, I found this topic particularly fascinating to research and write about.

Why This Matters Now

We're all familiar with those eerily accurate Netflix recommendations or Spotify playlists that seem to read our minds. But have you ever wondered how companies can provide these personalized experiences while keeping your data safe? In my DataVersity article, I dive deep into this question, exploring real-world examples from tech giants like Apple, Google, and Netflix.

Key Insights

While I encourage you to read the full article for all the technical details, here are some interesting takeaways:

  • Local processing on your device can actually deliver powerful personalization without sending sensitive data to the cloud
  • Companies like Apple and Google are pioneering innovative approaches like federated learning
  • Even streaming giants like Netflix have found ways to provide real-time recommendations while maintaining user anonymity

The Privacy-First Future

What excites me most about this topic is how companies are proving that privacy and personalization aren't mutually exclusive. In fact, as I detail in my article, privacy-preserving approaches might be the key to building deeper user trust and loyalty.

Want to Learn More?

If you're interested in understanding how these privacy-preserving recommendation systems work, including detailed examples and implementation challenges, check out my full article on DataVersity. I explore everything from fine-grained user permissions to anonymous session data handling, complete with real-world applications and future trends.

The full article covers:

  • How local models power privacy-friendly recommendations
  • The role of federated learning in protecting user data
  • Real-time personalization through anonymous session data
  • Practical challenges and solutions from major tech companies

04 November, 2024

The Hidden Backbone of the Internet: Infrastructure, Innovations, and the Future of Connectivity

 Internet infrastructure may seem invisible, but it's a complex web of data centers, fiber-optic cables, IXPs, and cutting-edge technologies. Data centers are the backbone, powering cloud storage and digital communications, while fiber-optic networks act as high-speed highways for data. Internet Exchange Points (IXPs) allow data to be transferred more efficiently between providers, keeping speeds high and reducing latency.

Adding to this are Content Delivery Networks (CDNs) like Cloudflare and Akamai, which cache data closer to users, reducing load and improving response times for data-heavy activities like streaming.

An exciting new frontier in internet access is satellite internet. With Starlink leading the way, low-Earth orbit satellites bring broadband to rural areas, aiming to bridge the digital divide. While these projects face challenges like weather impact and high setup costs, they hold promise for expanding access globally.

Looking ahead, 5G and edge computing are set to boost speed and efficiency. Edge computing, which processes data near its source, will complement 5G’s high-speed, low-latency potential for real-time applications like autonomous driving. Meanwhile, a shift towards sustainable data centers is helping companies reduce the internet's environmental footprint, making the infrastructure more energy-efficient.

With fiber, satellite, and wireless technologies evolving in hybrid models, future internet infrastructure aims to be faster, more resilient, and accessible, catering to the demands of an increasingly digital world.

21 October, 2024

Stay tuned...

 Hello world,

 It's been a while since I contributed to this blog. 
 Probably now I'm out of the black hole ;) and will keep things interesting here.

 Stay tuned...

15 August, 2023

Rate Limiting in System Design: Protecting Your APIs and Servers

 With the rise of large-scale applications and APIs, handling excessive traffic is a major challenge. Rate limiting is a crucial technique used to protect systems from abuse, prevent DDoS attacks, and ensure fair resource allocation. In this blog, we’ll explore rate limiting strategies, their implementations, and real-world use cases.


1. What is Rate Limiting?

Rate limiting controls the number of requests a client can send to a server within a specified time frame. It helps:

  • Prevent server overload – Protects backend services from excessive traffic.

  • Enhance security – Mitigates DDoS attacks and bot abuse.

  • Ensure fair usage – Prevents a single user from consuming all available resources.

  • Optimize performance – Ensures smooth operation for all users.


2. Common Rate Limiting Algorithms

a. Token Bucket Algorithm

  • Each user has a bucket filled with tokens.
  • Each request consumes one token.
  • Tokens are refilled at a fixed rate.
  • If the bucket is empty, requests are rejected or delayed.
    Best for: APIs that require smooth traffic control (e.g., messaging apps, payment gateways).

b. Leaky Bucket Algorithm

  • Requests are added to a queue (bucket).
  • Requests are processed at a constant rate.
  • If the queue overflows, extra requests are dropped.
    Best for: Ensuring a consistent request flow (e.g., video streaming, rate-limited APIs).

c. Fixed Window Rate Limiting

  • Defines a time window (e.g., 1 minute) and allows a fixed number of requests.
  • If the limit is reached, extra requests are rejected.
    Best for: Simple and predictable rate limiting (e.g., login attempts, API calls).

d. Sliding Window Rate Limiting

  • A rolling time window is used instead of fixed intervals.
  • More flexible than fixed window since it updates counts dynamically.
    Best for: Preventing bursts while allowing smoother traffic handling.
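To make the token bucket algorithm above concrete, here’s a minimal in-memory sketch in Go. It’s illustrative only; production systems usually keep the counters in Redis or enforce limits at the gateway.

package main

import (
    "fmt"
    "sync"
    "time"
)

// TokenBucket refills at `rate` tokens per second, up to `capacity`.
type TokenBucket struct {
    mu       sync.Mutex
    capacity float64
    tokens   float64
    rate     float64
    last     time.Time
}

func NewTokenBucket(capacity, rate float64) *TokenBucket {
    return &TokenBucket{capacity: capacity, tokens: capacity, rate: rate, last: time.Now()}
}

// Allow consumes one token if available; otherwise the request is rejected.
func (b *TokenBucket) Allow() bool {
    b.mu.Lock()
    defer b.mu.Unlock()

    now := time.Now()
    // Refill based on the time elapsed since the last call.
    b.tokens += now.Sub(b.last).Seconds() * b.rate
    if b.tokens > b.capacity {
        b.tokens = b.capacity
    }
    b.last = now

    if b.tokens >= 1 {
        b.tokens--
        return true
    }
    return false
}

func main() {
    bucket := NewTokenBucket(5, 1) // burst of 5, refills 1 token per second
    for i := 0; i < 7; i++ {
        fmt.Printf("request %d allowed: %v\n", i+1, bucket.Allow())
    }
}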

3. Implementing Rate Limiting in APIs

a. Using API Gateways

  • Cloud providers offer built-in rate limiting in AWS API Gateway, Azure API Management, and Cloudflare.
  • Example: AWS API Gateway can be configured to allow 1000 requests per second per user.

b. Implementing in Nginx

Nginx provides built-in rate limiting:

nginx

http {
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=5r/s;

    server {
        location /api/ {
            limit_req zone=api_limit burst=10 nodelay;
        }
    }
}

  • Limits clients to 5 requests per second with a burst of 10.

c. Implementing in Redis

Redis can be used to track request counts:

python

import redis
from flask import Flask, request, jsonify

app = Flask(__name__)
redis_client = redis.StrictRedis(host='localhost', port=6379, db=0)

RATE_LIMIT = 10  # Max requests per minute
WINDOW = 60      # 60 seconds

@app.route('/api/resource')
def api_resource():
    user_ip = request.remote_addr
    key = f"rate_limit:{user_ip}"

    requests = redis_client.incr(key)
    if requests == 1:
        redis_client.expire(key, WINDOW)

    if requests > RATE_LIMIT:
        return jsonify({"error": "Too many requests"}), 429

    return jsonify({"message": "Request successful"})

if __name__ == '__main__':
    app.run()

  • Allows 10 requests per minute per IP.

4. Real-World Use Cases

🔹 Login Attempt Protection – Limits failed login attempts to prevent brute-force attacks.
🔹 API Monetization – Premium users get higher request limits than free users.
🔹 DDoS Mitigation – Blocking excessive traffic from suspicious IPs.
🔹 Messaging Platforms – Controlling spam by limiting messages per user.


5. Challenges & Best Practices

✔️ Handle Burst Traffic – Use bursts + gradual rate reductions to prevent abrupt blocking.
✔️ Implement Exponential Backoff – Delay retries for failed requests.
✔️ Use Distributed Rate Limiting – Ensure consistency across multiple servers using Redis or cloud solutions.
✔️ Provide Clear Error Messages – Use HTTP 429 Too Many Requests response with retry hints.


6. Conclusion

Rate limiting is essential for protecting APIs, preventing abuse, and optimizing performance. Choosing the right strategy (e.g., token bucket for smooth control, sliding window for flexibility) can help ensure a balanced system.

12 June, 2023

Caching in System Design: Speeding Up Performance

 As applications scale, performance becomes a critical concern. One of the most effective ways to improve response times and reduce database load is by using caching. Whether you're designing a high-traffic web application or a distributed system, caching can significantly enhance speed and scalability.


1. What is Caching?

Caching is the process of storing frequently accessed data in a fast, temporary storage layer (e.g., RAM) to avoid redundant computations or database queries. Instead of fetching data from a slow backend, caching enables applications to retrieve it almost instantly.


2. Why Use Caching?

  • Improves Speed – Reduces the time taken to retrieve data.

  • Reduces Database Load – Minimizes queries and write operations.

  • Enhances Scalability – Handles large traffic efficiently.

  • Improves User Experience – Faster responses lead to better engagement.


3. Types of Caching

a. Application-Level Caching

  • Stores computed results at the application level.
  • Example: Caching API responses in memory.

b. Database Caching

  • Uses a caching layer between the application and database.
  • Example: MySQL query cache or Redis.

c. Content Delivery Network (CDN) Caching

  • Caches static content (images, CSS, JavaScript) at edge locations near users.
  • Example: Cloudflare, AWS CloudFront.

d. Distributed Caching

  • A caching system shared across multiple servers.
  • Example: Memcached, Redis Cluster.

4. Cache Invalidation Strategies

Keeping cached data up-to-date is critical. Common techniques include:

a. Time-to-Live (TTL)

  • Sets an expiration time on cached data.
  • Example: User profile cache expires every 10 minutes.

b. Write-Through Caching

  • Data is written to both the cache and database simultaneously.
  • Pros: Ensures consistency.
  • Cons: Higher write latency.

c. Cache-aside (Lazy Loading)

  • Data is loaded into the cache only when requested.
  • Pros: Reduces unnecessary caching.
  • Cons: First request may be slow.

d. Write-Back Caching

  • Data is written to the cache first and later updated in the database.
  • Pros: Improves write performance.
  • Cons: Risk of data loss if the cache fails.

5. When to Use Caching?

  • High Read Workloads (e.g., social media feeds, recommendation systems).
  • Slow Database Queries (e.g., expensive JOIN operations).
  • Session Storage (e.g., user authentication tokens).
  • Rate Limiting (e.g., storing API request counts).

6. Caching Tools & Technologies

🚀 Redis – In-memory key-value store with TTL, pub/sub, and clustering.
🚀 Memcached – Lightweight, distributed caching system.
🚀 Varnish – HTTP caching for web acceleration.
🚀 Cloudflare / AWS CloudFront – CDN-based caching for static content.


7. Example: Caching in a Social Media App

Consider a Twitter-like system with millions of users:

  1. User requests a trending tweets list.
  2. The system first checks Redis cache.
  3. If found → Serve from cache (fast response).
  4. If not found → Query the database, update cache, and return the response.

This reduces database load and improves response time for frequent queries.
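As a rough illustration of that flow, here’s a minimal cache-aside sketch in Go. The in-memory map and the loadTrendingFromDB function are stand-ins for Redis and the real database; the names and TTL are made up for the example.

package main

import (
    "fmt"
    "sync"
    "time"
)

type cacheEntry struct {
    value     []string
    expiresAt time.Time
}

var (
    mu    sync.Mutex
    cache = map[string]cacheEntry{}
)

// loadTrendingFromDB stands in for the expensive database query.
func loadTrendingFromDB() []string {
    time.Sleep(50 * time.Millisecond) // simulate a slow query
    return []string{"tweet-1", "tweet-2", "tweet-3"}
}

// GetTrending implements cache-aside: check the cache first, fall back
// to the database on a miss, then populate the cache with a TTL.
func GetTrending() []string {
    mu.Lock()
    entry, ok := cache["trending"]
    mu.Unlock()

    if ok && time.Now().Before(entry.expiresAt) {
        return entry.value // cache hit: fast path
    }

    value := loadTrendingFromDB() // cache miss: slow path
    mu.Lock()
    cache["trending"] = cacheEntry{value: value, expiresAt: time.Now().Add(30 * time.Second)}
    mu.Unlock()
    return value
}

func main() {
    start := time.Now()
    GetTrending() // miss, goes to the "database"
    fmt.Println("first call:", time.Since(start))

    start = time.Now()
    GetTrending() // hit, served from cache
    fmt.Println("second call:", time.Since(start))
}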


8. Common Caching Pitfalls & Solutions

Cache Stampede (Thundering Herd Problem) – Too many requests to update expired cache.
✔️ Solution: Use staggered TTLs and lock mechanisms (e.g., Redis Redlock).

Stale Data – Cache serving outdated information.
✔️ Solution: Use write-through or event-driven cache invalidation.

Over-Caching – Caching unnecessary or frequently changing data.
✔️ Solution: Cache only read-heavy, slow queries.


9. Conclusion

Caching is a powerful technique for optimizing system performance. By choosing the right caching strategy and tools, you can drastically improve speed, reduce load, and scale your system efficiently.

14 March, 2023

Database Sharding: A Guide to Scaling Your Database

 As applications grow, handling massive amounts of data becomes a challenge. One of the most effective ways to scale a database is sharding—a technique that partitions large datasets into smaller, more manageable pieces across multiple servers. In this guide, we’ll explore the fundamentals of database sharding, its benefits, challenges, and real-world applications.

1. What is Database Sharding?

Database sharding is a technique where a large database is split into smaller, independent databases called shards. Each shard contains a subset of the total data and can operate independently, reducing the load on a single database instance.

For example, an e-commerce platform with millions of users could shard its database by user ID ranges, ensuring that queries for different users are processed on separate database instances.

2. Why Use Sharding?

  • Scalability: Distributes data across multiple servers, preventing bottlenecks.
  • Performance Improvement: Reduces query response times by allowing parallel processing.
  • Fault Tolerance: Limits the impact of failures—if one shard fails, only part of the system is affected.
  • Cost Efficiency: Allows horizontal scaling by adding more servers instead of upgrading a single powerful database.

3. Sharding Strategies

a. Range-Based Sharding

  • Data is partitioned based on a range of values.
  • Example: Users with IDs 1 to 1M go to Shard A, users with IDs 1M+1 to 2M go to Shard B.
  • Pros: Simple to implement and query.
  • Cons: Uneven load if data is skewed (e.g., one shard may receive more queries than others).

b. Hash-Based Sharding

  • A hash function is used to distribute data evenly across shards.
  • Example: shard_id = hash(user_id) % number_of_shards.
  • Pros: Prevents uneven load distribution.
  • Cons: Difficult to add new shards, as it may require redistributing existing data.

c. Geo-Based Sharding

  • Data is split based on geographic location.
  • Example: Users in North America go to Shard A, users in Europe go to Shard B.
  • Pros: Useful for applications with regional traffic (e.g., social media, e-commerce).
  • Cons: Some shards may receive more traffic than others.

d. Directory-Based Sharding

  • A lookup service determines which shard contains specific data.
  • Example: A metadata table maps customers to specific shards.
  • Pros: Flexible and allows complex sharding logic.
  • Cons: Requires an additional lookup step, increasing latency.
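As a quick illustration of the hash-based strategy above, here’s a small Go sketch that maps a user ID to a shard. The shard connection strings are placeholders; real systems often use consistent hashing to make adding shards less painful.

package main

import (
    "fmt"
    "hash/fnv"
)

// shards would be real connection strings in practice; placeholders here.
var shards = []string{
    "postgres://shard-a.internal/users",
    "postgres://shard-b.internal/users",
    "postgres://shard-c.internal/users",
}

// shardFor implements shard_id = hash(user_id) % number_of_shards.
func shardFor(userID string) string {
    h := fnv.New32a()
    h.Write([]byte(userID))
    return shards[int(h.Sum32())%len(shards)]
}

func main() {
    for _, id := range []string{"user-1001", "user-1002", "user-1003"} {
        fmt.Printf("%s -> %s\n", id, shardFor(id))
    }
}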

4. Challenges of Sharding

a. Complexity in Application Logic

  • The application must determine which shard to query, making database interactions more complex.

b. Rebalancing Data

  • When adding new shards, existing data may need to be redistributed, causing downtime or performance degradation.

c. Cross-Shard Queries

  • Queries that span multiple shards (e.g., SELECT COUNT(*) FROM users) are difficult to execute efficiently.
  • Solution: Use distributed query engines (e.g., Presto, Apache Drill).

d. Data Consistency

  • Maintaining ACID (Atomicity, Consistency, Isolation, Durability) properties across multiple shards can be challenging.
  • Solution: Use eventual consistency or distributed transactions (e.g., two-phase commit).

5. Real-World Use Cases

  • Facebook: Shards user data to scale its massive social network.
  • Amazon: Uses sharding for handling product catalogs and customer transactions.
  • Twitter: Stores tweets in different shards based on user ID hashing.

6. Best Practices for Implementing Sharding

  • Choose the Right Sharding Strategy: Analyze your application’s query patterns before deciding on a method.
  • Monitor Performance: Use load balancing to evenly distribute queries across shards.
  • Use Middleware for Query Routing: Tools like Vitess or Citus help manage sharded databases.
  • Plan for Scaling: Design a system that can accommodate future shard additions with minimal downtime.

7. Conclusion

Database sharding is a powerful technique for handling large-scale applications but comes with trade-offs. Understanding when and how to shard a database can significantly improve system scalability and performance.

27 January, 2023

Mastering Load Balancing in System Design

 In modern system design, ensuring high availability, reliability, and scalability is crucial. One of the key techniques to achieve this is load balancing. Whether you're designing a small-scale web application or a globally distributed system, a well-implemented load balancing strategy can significantly improve performance.

1. What is Load Balancing?

Load balancing is the process of distributing incoming network traffic across multiple servers to ensure no single server gets overwhelmed. It helps improve response time, maximize resource utilization, and provide redundancy in case of server failures.

2. Why is Load Balancing Important?

  • Scalability: Helps manage increasing traffic efficiently.
  • High Availability: Ensures uptime by distributing traffic across multiple servers.
  • Fault Tolerance: If one server goes down, traffic is redirected to healthy servers.
  • Improved Performance: Reduces latency by routing requests to the closest or least busy server.

3. Types of Load Balancers

a. Hardware vs. Software Load Balancers

  • Hardware Load Balancers: Dedicated physical devices optimized for high-speed traffic management (e.g., F5, Citrix ADC).
  • Software Load Balancers: Software-based solutions that run on cloud or local servers (e.g., Nginx, HAProxy, AWS Elastic Load Balancer).

b. Layer 4 vs. Layer 7 Load Balancers

  • Layer 4 (Transport Layer): Routes traffic based on IP address and port (e.g., TCP, UDP).
  • Layer 7 (Application Layer): Routes traffic based on content (e.g., HTTP headers, cookies, URLs).

4. Load Balancing Algorithms

a. Round Robin

  • Requests are distributed sequentially across servers in a circular order.
  • Best for: Equal-capacity servers with consistent workloads.

b. Least Connections

  • Directs traffic to the server with the fewest active connections.
  • Best for: Scenarios where requests vary in processing time.

c. IP Hashing

  • Routes requests from the same client IP to the same backend server.
  • Best for: Sticky sessions (e.g., shopping carts, user authentication).

d. Weighted Round Robin

  • Assigns weights to servers based on capacity, directing more traffic to powerful machines.
  • Best for: Mixed server environments with varying hardware capabilities.
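For a feel of how simple the core of these algorithms can be, here’s a minimal round robin sketch in Go. It’s illustrative only; real load balancers add health checks, weights, and connection tracking.

package main

import (
    "fmt"
    "sync/atomic"
)

// RoundRobin cycles through backends in order; the counter is atomic,
// so Next is safe to call from concurrent request handlers.
type RoundRobin struct {
    backends []string
    counter  uint64
}

func (rr *RoundRobin) Next() string {
    n := atomic.AddUint64(&rr.counter, 1)
    return rr.backends[int(n-1)%len(rr.backends)]
}

func main() {
    lb := &RoundRobin{backends: []string{"app-1:8080", "app-2:8080", "app-3:8080"}}
    for i := 0; i < 6; i++ {
        fmt.Println("route request to", lb.Next())
    }
}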

5. Load Balancing in the Cloud

Cloud providers offer managed load balancers that simplify deployment and scaling. Some popular services include:

  • AWS Elastic Load Balancer (ELB)
  • Google Cloud Load Balancer
  • Azure Load Balancer

These cloud-based solutions automatically scale based on demand and integrate with monitoring tools.

6. Example: Load Balancer in a Web Application

Consider a large-scale e-commerce website with millions of users. A typical architecture might include:

  1. Client Requests → Users access the website.
  2. Load Balancer → Distributes requests among multiple application servers.
  3. Application Servers → Process requests and interact with databases.
  4. Database Replication → Ensures redundancy and faster read operations.
  5. CDN (Content Delivery Network) → Improves performance by caching static content closer to users.

7. Challenges & Best Practices

  • Avoid Single Points of Failure: Deploy multiple load balancers in different availability zones.
  • Health Checks: Continuously monitor backend servers and reroute traffic if a server fails.
  • Session Persistence: Maintain user sessions using sticky sessions or distributed caching (Redis, Memcached).
  • Auto-Scaling Integration: Link load balancers with auto-scaling policies to dynamically adjust resources.

8. Conclusion

Load balancing is a fundamental concept in system design that ensures scalability, availability, and performance. By choosing the right load balancing strategy, companies can provide seamless user experiences and handle high traffic efficiently.

18 May, 2022

System Design Refresher

In today’s tech-driven world, designing scalable and efficient systems is crucial for building robust applications. Whether you are a software engineer, an architect, or an aspiring system designer, understanding the principles of system design can set you apart in the industry.

1. What is System Design?

System design is the process of defining the architecture, components, modules, interfaces, and data flows of a system. It involves making decisions on how different parts of an application interact to ensure scalability, reliability, and maintainability.

2. Key Concepts in System Design

a. Scalability

  • Horizontal Scaling: Adding more machines to handle increased load (e.g., adding more web servers).
  • Vertical Scaling: Increasing the power of a single machine (e.g., upgrading RAM, CPU).

b. Load Balancing

A load balancer distributes incoming requests among multiple servers to ensure smooth performance and prevent overload.

c. Caching

Caching stores frequently accessed data in memory (e.g., Redis, Memcached) to reduce database queries and speed up response times.

d. Database Design

  • SQL vs NoSQL: SQL databases (MySQL, PostgreSQL) offer structured data storage, while NoSQL (MongoDB, Cassandra) is better for unstructured and large-scale data.
  • Sharding and Replication: Techniques to distribute data across multiple servers to improve availability and performance.

e. Microservices Architecture

Breaking down a monolithic application into smaller, independent services that communicate via APIs. This improves maintainability and scalability.

3. Steps to Design a Scalable System

Step 1: Understand Requirements

Define functional and non-functional requirements. Ask questions like:

  • What is the expected traffic?
  • How much data will be stored?
  • What are the uptime and latency requirements?

Step 2: Define High-Level Architecture

Create a system diagram outlining key components:

  • Load balancers
  • Web servers
  • Application servers
  • Databases
  • Caching layers

Step 3: Choose the Right Tech Stack

Select programming languages, frameworks, databases, and cloud services based on scalability and efficiency needs.

Step 4: Handle Data Storage and Management

  • Use replication for redundancy.
  • Implement sharding for large datasets.
  • Optimize queries with indexes and caching.

Step 5: Ensure Reliability & Security

  • Use CDNs to distribute content efficiently.
  • Implement rate limiting to prevent abuse.
  • Encrypt sensitive data and use authentication mechanisms (OAuth, JWT).

4. Case Study: Designing a URL Shortener Like Bit.ly

A URL shortener is a great example of a scalable system. Key components include:

  • API Layer: Handles URL shortening and retrieval requests.
  • Database: Stores URL mappings (SQL for structured storage, NoSQL for high write/read efficiency).
  • Caching: Speeds up retrieval (e.g., Redis).
  • Load Balancing: Distributes traffic among servers.
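To make the case study slightly more concrete, here’s a hedged sketch of one common approach: assign each long URL an auto-incrementing ID and encode it in base62 to get the short code. The counter and map below stand in for the real database.

package main

import "fmt"

const alphabet = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

// encodeBase62 turns a numeric database ID into a short code.
func encodeBase62(n uint64) string {
    if n == 0 {
        return string(alphabet[0])
    }
    var out []byte
    for n > 0 {
        out = append([]byte{alphabet[n%62]}, out...)
        n /= 62
    }
    return string(out)
}

var (
    nextID  uint64 = 100000              // stands in for an auto-increment key
    mapping        = map[string]string{} // short code -> long URL
)

// Shorten stores the mapping and returns the short code.
func Shorten(longURL string) string {
    nextID++
    code := encodeBase62(nextID)
    mapping[code] = longURL
    return code
}

func main() {
    code := Shorten("https://example.com/some/very/long/path")
    fmt.Println("short code:", code, "->", mapping[code])
}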

5. Conclusion

System design is a critical skill for building scalable, efficient, and resilient applications. By understanding the core principles—scalability, caching, databases, and microservices—you can design systems that handle real-world challenges.

07 March, 2022

Service Mesh Implementation for Go Microservices

 Introduction

As microservice architectures grow in complexity, the challenges of service-to-service communication become increasingly difficult to solve at the application level. Service meshes have emerged as a powerful solution to these challenges, providing a dedicated infrastructure layer to handle network communication between services while offering features like load balancing, service discovery, traffic management, security, and observability.

Over the past year, I've been implementing and optimizing service mesh solutions for Go microservices in production environments. In this article, I'll share practical insights on implementing service meshes for Go applications, comparing popular options like Istio and Linkerd, and demonstrating how to configure and optimize them for production use.

Understanding Service Mesh Architecture

Before diving into implementation details, let's establish a clear understanding of service mesh architecture:

What is a Service Mesh?

A service mesh is a dedicated infrastructure layer for handling service-to-service communication. It's usually implemented as a set of network proxies deployed alongside application code (a pattern known as the "sidecar proxy").

Key Components

A typical service mesh consists of:

  1. Data Plane: Network proxies (sidecars) that mediate communication between services
  2. Control Plane: Central management system that configures the proxies and provides APIs
  3. Ingress/Egress Gateways: Special proxies that handle traffic entering and leaving the mesh

Core Capabilities

Service meshes typically provide:

  1. Traffic Management: Load balancing, circuit breaking, retries, timeouts
  2. Security: mTLS, authorization policies, certificate management
  3. Observability: Metrics, distributed tracing, logging
  4. Service Discovery: Automatic registration and discovery of services
  5. Resilience: Fault injection, error handling

Why Use a Service Mesh with Go Microservices?

Go is already excellent for building microservices, with its strong standard library, efficient concurrency model, and small binary sizes. However, a service mesh can still provide significant benefits:

1. Infrastructure vs. Application Logic

Without a service mesh, you'd need to implement features like retry logic, circuit breaking, and service discovery in your application code:

// Without service mesh - implementing circuit breaking in code
func callUserService(ctx context.Context, userID string) (*User, error) {
    breaker := circuitbreaker.New(
        circuitbreaker.FailureThreshold(3),
        circuitbreaker.ResetTimeout(5*time.Second),
    )

    result, err := breaker.Execute(func() (interface{}, error) {
        resp, err := httpClient.Get("http://user-service/users/" + userID)
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()

        if resp.StatusCode >= 500 {
            return nil, fmt.Errorf("server error: %d", resp.StatusCode)
        }

        var user User
        if err := json.NewDecoder(resp.Body).Decode(&user); err != nil {
            return nil, err
        }

        return &user, nil
    })
    if err != nil {
        return nil, err
    }

    return result.(*User), nil
}

With a service mesh, this becomes much simpler:

// With service mesh - let the mesh handle circuit breaking
func callUserService(ctx context.Context, userID string) (*User, error) {
    resp, err := httpClient.Get("http://user-service/users/" + userID)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()

    var user User
    if err := json.NewDecoder(resp.Body).Decode(&user); err != nil {
        return nil, err
    }

    return &user, nil
}

2. Consistent Policies Across Services

A service mesh ensures that policies like timeout settings, retry logic, and security configurations are applied consistently across all services, regardless of language or framework.

3. Observability Without Code Changes

Service meshes automatically collect metrics and traces without requiring changes to your application code.

Popular Service Mesh Solutions

Let's compare the most popular service mesh implementations:

Istio

Istio is a powerful, feature-rich service mesh developed by Google, IBM, and Lyft.

Pros:

  • Comprehensive feature set
  • Advanced traffic management
  • Strong security capabilities
  • Integrates with Kubernetes

Cons:

  • Complex installation and configuration
  • Higher resource overhead
  • Steeper learning curve

Linkerd

Linkerd is a lightweight, CNCF-hosted service mesh designed for simplicity and ease of use.

Pros:

  • Lighter resource footprint
  • Simpler installation and configuration
  • Focused on core service mesh features
  • Written in Rust and Go for performance

Cons:

  • Fewer features than Istio
  • Less advanced traffic management

Consul Connect

HashiCorp's Consul includes service mesh capabilities via Consul Connect.

Pros:

  • Integrated with HashiCorp ecosystem
  • Works in non-Kubernetes environments
  • Simplified architecture

Cons:

  • More limited feature set
  • Less automatic than Istio/Linkerd in Kubernetes

Implementing Istio with Go Microservices

Let's walk through the process of implementing Istio for a Go microservice architecture. We'll use a real-world example of an e-commerce application with multiple services.

Step 1: Installing Istio

First, install Istio in your Kubernetes cluster:

istioctl install --set profile=demo

This installs Istio with a configuration profile suitable for demonstration purposes. For production, you'd want to customize the installation.

Step 2: Enabling Sidecar Injection

For Istio to work, each pod needs a sidecar proxy. You can enable automatic injection by labeling your namespace:

kubectl label namespace default istio-injection=enabled

Step 3: Deploying Go Microservices

Let's deploy our Go microservices. Here's an example Kubernetes deployment for a product service:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-service
  labels:
    app: product-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: product-service
  template:
    metadata:
      labels:
        app: product-service
    spec:
      containers:
        - name: product-service
          image: your-registry/product-service:1.0.0
          ports:
            - containerPort: 8080
          env:
            - name: SERVICE_PORT
              value: "8080"
            - name: DB_HOST
              value: "products-db"
          readinessProbe:
            httpGet:
              path: /health
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "200m"
              memory: "256Mi"

And a corresponding service:

apiVersion: v1
kind: Service
metadata:
  name: product-service
spec:
  selector:
    app: product-service
  ports:
    - port: 80
      targetPort: 8080
  type: ClusterIP

Step 4: Configuring Traffic Management

One of Istio's key features is traffic management. For example, to implement canary deployments, you can use a VirtualService and DestinationRule:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product-service
spec:
  hosts:
    - product-service
  http:
    - route:
        - destination:
            host: product-service
            subset: v1
          weight: 90
        - destination:
            host: product-service
            subset: v2
          weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: product-service
spec:
  host: product-service
  subsets:
    - name: v1
      labels:
        version: v1
    - name: v2
      labels:
        version: v2

This configuration routes 90% of traffic to v1 and 10% to v2 of the product service.

Step 5: Implementing Security with mTLS

Istio can automatically secure service-to-service communication with mutual TLS:

apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT

This enables strict mTLS for all services in the default namespace.

Step 6: Setting Up Resilience Patterns

Configure circuit breaking to prevent cascading failures:

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: product-service
spec:
  host: product-service
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: 100
      http:
        http1MaxPendingRequests: 10
        maxRequestsPerConnection: 10
    outlierDetection:
      consecutiveErrors: 5
      interval: 30s
      baseEjectionTime: 30s

This configuration limits connections and implements circuit breaking based on consecutive errors.

Step 7: Implementing Retries and Timeouts

Add retry logic and timeouts to handle transient failures:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: product-service
spec:
  hosts:
    - product-service
  http:
    - route:
        - destination:
            host: product-service
      retries:
        attempts: 3
        perTryTimeout: 2s
      timeout: 5s

This configuration attempts up to 3 retries with a 2-second timeout per attempt and a 5-second overall timeout.

Optimizing Go Services for Service Mesh

When running Go services with a service mesh, there are several optimizations to consider:

1. Health Checks and Readiness Probes

Implement comprehensive health checks to help the service mesh make accurate routing decisions:

func setupHealthChecks(router *mux.Router) {
    router.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
        w.WriteHeader(http.StatusOK)
        w.Write([]byte("OK"))
    }).Methods("GET")

    router.HandleFunc("/ready", func(w http.ResponseWriter, r *http.Request) {
        // Check dependencies
        if !isDatabaseConnected() || !isRedisConnected() {
            w.WriteHeader(http.StatusServiceUnavailable)
            w.Write([]byte("Not ready"))
            return
        }

        w.WriteHeader(http.StatusOK)
        w.Write([]byte("Ready"))
    }).Methods("GET")
}

2. Resource Optimization

Service meshes, especially Istio, add overhead in terms of CPU and memory usage. Optimize your Go services to be more resource-efficient:

// Use connection pooling
var httpClient = &http.Client{
    Transport: &http.Transport{
        MaxIdleConns:        100,
        MaxIdleConnsPerHost: 20,
        IdleConnTimeout:     90 * time.Second,
    },
    Timeout: 10 * time.Second,
}

// Efficient JSON handling
func respondJSON(w http.ResponseWriter, data interface{}) error {
    w.Header().Set("Content-Type", "application/json")

    // Use json.NewEncoder for streaming response
    return json.NewEncoder(w).Encode(data)
}

3. Propagating Trace Context

While service meshes handle distributed tracing automatically, you can enhance this by propagating trace context in your application:

func tracingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        span := opentracing.SpanFromContext(r.Context())
        if span == nil {
            // Extract trace context from headers
            wireContext, err := opentracing.GlobalTracer().Extract(
                opentracing.HTTPHeaders,
                opentracing.HTTPHeadersCarrier(r.Header),
            )

            if err == nil {
                // Create a new span
                span = opentracing.StartSpan(
                    r.URL.Path,
                    opentracing.ChildOf(wireContext),
                )
                defer span.Finish()

                // Add the span to the context
                ctx := opentracing.ContextWithSpan(r.Context(), span)
                r = r.WithContext(ctx)
            }
        }

        next.ServeHTTP(w, r)
    })
}

4. Graceful Shutdown

Implement graceful shutdown to ensure in-flight requests complete when the service is terminated:

func main() {
    // Initialize server
    server := &http.Server{
        Addr:    ":8080",
        Handler: setupRouter(),
    }

    // Start server
    go func() {
        if err := server.ListenAndServe(); err != nil && err != http.ErrServerClosed {
            log.Fatalf("Server error: %v", err)
        }
    }()

    // Wait for interrupt signal
    quit := make(chan os.Signal, 1)
    signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
    <-quit

    log.Println("Shutting down server...")

    // Create a deadline for shutdown
    ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    defer cancel()

    // Attempt graceful shutdown
    if err := server.Shutdown(ctx); err != nil {
        log.Fatalf("Server forced to shutdown: %v", err)
    }

    log.Println("Server exited properly")
}

Monitoring and Observability

A key benefit of service meshes is enhanced observability. Let's explore how to leverage this with Go services:

Metrics Collection

Istio automatically collects key metrics like request count, latency, and error rates. You can add custom metrics using Prometheus:

func prometheusMiddleware(next http.Handler) http.Handler {
    requestCounter := prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total number of HTTP requests",
        },
        []string{"method", "endpoint", "status"},
    )

    requestDuration := prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "http_request_duration_seconds",
            Help:    "HTTP request duration in seconds",
            Buckets: prometheus.DefBuckets,
        },
        []string{"method", "endpoint"},
    )

    prometheus.MustRegister(requestCounter, requestDuration)

    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()

        // Create a response writer wrapper to capture the status code
        wrapper := newResponseWriter(w)

        // Call the next handler
        next.ServeHTTP(wrapper, r)

        // Record metrics
        duration := time.Since(start).Seconds()
        requestCounter.WithLabelValues(r.Method, r.URL.Path, fmt.Sprintf("%d", wrapper.statusCode)).Inc()
        requestDuration.WithLabelValues(r.Method, r.URL.Path).Observe(duration)
    })
}
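Both the Prometheus and logging middleware rely on a small wrapper to capture the response status code. That helper isn’t shown in the snippets, so here is a minimal version consistent with how newResponseWriter and wrapper.statusCode are used above (an assumption about the helper, not its actual implementation):

// responseWriter wraps http.ResponseWriter so middleware can read the
// status code after the handler has run.
type responseWriter struct {
    http.ResponseWriter
    statusCode int
}

func newResponseWriter(w http.ResponseWriter) *responseWriter {
    // Default to 200 in case the handler never calls WriteHeader explicitly.
    return &responseWriter{ResponseWriter: w, statusCode: http.StatusOK}
}

func (rw *responseWriter) WriteHeader(code int) {
    rw.statusCode = code
    rw.ResponseWriter.WriteHeader(code)
}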

Distributed Tracing

Istio integrates with tracing systems like Jaeger. You can enhance tracing by adding custom spans:

func handleGetProduct(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context()
    productID := chi.URLParam(r, "id")

    // Start a new span
    span, ctx := opentracing.StartSpanFromContext(ctx, "get_product")
    defer span.Finish()

    span.SetTag("product.id", productID)

    // Get product from database
    product, err := productRepo.GetByID(ctx, productID)
    if err != nil {
        span.SetTag("error", true)
        span.LogFields(
            log.String("event", "error"),
            log.String("message", err.Error()),
        )
        http.Error(w, "Product not found", http.StatusNotFound)
        return
    }

    // Respond with product
    respondJSON(w, product)
}

Logging Integration

Structured logging integrates well with service mesh observability:

func loggingMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        start := time.Now()

        // Extract trace and span IDs
        traceID := r.Header.Get("x-b3-traceid")
        spanID := r.Header.Get("x-b3-spanid")

        // Create a response writer wrapper
        wrapper := newResponseWriter(w)

        // Process request
        next.ServeHTTP(wrapper, r)

        // Log request details
        logger.Info().
            Str("method", r.Method).
            Str("path", r.URL.Path).
            Str("remote_addr", r.RemoteAddr).
            Int("status", wrapper.statusCode).
            Dur("duration", time.Since(start)).
            Str("trace_id", traceID).
            Str("span_id", spanID).
            Msg("Request processed")
    })
}

Service Mesh in Production: Lessons Learned

Having implemented service meshes in production environments, here are some key lessons and best practices:

1. Resource Planning

Service meshes, especially Istio, can consume significant resources. Plan accordingly:

  • CPU Overhead: Expect 10-20% CPU overhead per pod
  • Memory Usage: The Istio sidecar typically uses 50-100MB of memory
  • Latency Impact: Expect a small latency increase (usually single-digit milliseconds)

2. Gradual Adoption

Rather than deploying a service mesh across your entire infrastructure at once, adopt it gradually:

  1. Start with non-critical services
  2. Monitor performance and resource usage
  3. Gradually expand to more services
  4. Apply advanced features incrementally

3. Optimizing Istio Installation

For production, customize Istio installation for your specific needs:

istioctl install \
  --set values.pilot.resources.requests.cpu=500m \
  --set values.pilot.resources.requests.memory=2048Mi \
  --set components.cni.enabled=true \
  --set values.global.proxy.resources.requests.cpu=100m \
  --set values.global.proxy.resources.requests.memory=128Mi \
  --set values.global.proxy.resources.limits.cpu=200m \
  --set values.global.proxy.resources.limits.memory=256Mi

4. Handling Upgrades

Service mesh upgrades require careful planning:

  1. Test upgrades in a staging environment first
  2. Back up Istio configuration before upgrading
  3. Consider canary upgrading the control plane
  4. Plan for potential downtime or degraded service during upgrades

5. Troubleshooting Common Issues

Some common issues we've encountered with service meshes in production:

  • 503 Errors: Often caused by timeout settings or readiness probe failures
  • Mutual TLS Issues: Certificate errors or misconfigured TLS settings
  • High Latency: Typically due to misconfigured connection pools or unnecessary proxying
  • Webhook Errors: Issues with the Istio sidecar injection webhook

6. Monitoring the Mesh Itself

Don't forget to monitor the service mesh components:

  • Control Plane Metrics: Monitor resource usage and performance
  • Data Plane Metrics: Track proxy performance and errors
  • Configuration Validation: Regularly check for configuration errors

Implementing a Service Mesh with Linkerd: A Lighter Alternative

If Istio's complexity and resource requirements are concerns, Linkerd offers a lighter alternative:

Installing Linkerd

Install the Linkerd CLI and deploy it to your cluster:

linkerd install | kubectl apply -f -

Injecting Sidecars

Like Istio, Linkerd uses sidecar injection:

kubectl get deploy -o yaml | linkerd inject - | kubectl apply -f -

Traffic Management

While less advanced than Istio, Linkerd provides essential traffic management:

apiVersion: split.smi-spec.io/v1alpha1
kind: TrafficSplit
metadata:
  name: product-service-split
spec:
  service: product-service
  backends:
    - service: product-service-v1
      weight: 90
    - service: product-service-v2
      weight: 10

Observability

Linkerd provides excellent observability with minimal configuration:

linkerd dashboard &

This opens a web dashboard with metrics, service topology, and traffic details.

Real-World Case Study: Migrating a Go Microservice Architecture to Istio

Let's walk through a real-world case study of migrating a Go-based microservice architecture to Istio:

The Starting Point

  • 15 Go microservices
  • Kubernetes-based deployment
  • Manual service discovery via Kubernetes Services
  • Basic load balancing via kube-proxy
  • No consistent security policy

The Migration Process

We followed these steps to migrate to Istio:

  1. Assessment and Planning:

    • Audited all services for compatibility
    • Identified potential issues (non-HTTP traffic, stateful services)
    • Created a migration plan
  2. Preparation:

    • Added health and readiness checks to all services
    • Optimized resource settings
    • Implemented graceful shutdown
  3. Initial Deployment:

    • Installed Istio in a separate namespace
    • Deployed copies of services with sidecar injection
    • Validated functionality with test traffic
  4. Testing and Validation:

    • Load tested services with sidecars
    • Monitored for errors and performance issues
    • Validated observability features
  5. Gradual Rollout:

    • Migrated one service at a time, starting with non-critical services
    • Incrementally shifted traffic to mesh-enabled services
    • Implemented advanced features (mTLS, circuit breaking) as separate steps
  6. Monitoring and Optimization:

    • Set up dashboards for service mesh metrics
    • Created alerts for service mesh issues
    • Continuously optimized mesh configuration

Results

After migrating to Istio, we observed:

  • Improved Resilience: 45% reduction in cascading failures
  • Enhanced Security: All service-to-service communication secured with mTLS
  • Better Visibility: Comprehensive service metrics and distributed tracing
  • Consistent Policies: Standardized retry, timeout, and circuit breaking across all services
  • Simplified Code: Removed boilerplate resilience code from applications

Challenges Faced

The migration wasn't without challenges:

  • Resource Consumption: Had to increase cluster size by 15%
  • Complexity: Required significant team training on Istio concepts
  • Performance Impact: Initial 10-15ms latency increase (later optimized to 5-8ms)
  • Debugging Complexity: Service issues became harder to diagnose initially

Conclusion

Service meshes offer powerful capabilities for managing communication in microservice architectures, but they come with complexity and resource costs. When implemented correctly, they can provide substantial benefits in terms of reliability, security, and observability.

For Go microservices, which are already lightweight and efficient, the decision to adopt a service mesh should carefully weigh the benefits against the added complexity and resource overhead. In many cases, the benefits outweigh the costs, especially as your architecture grows beyond a handful of services.

Key takeaways from this article:

  1. Understand Your Needs: Choose between comprehensive (Istio) vs. lightweight (Linkerd) based on your specific requirements
  2. Optimize Resources: Carefully configure proxy resources and tune the mesh for efficiency
  3. Gradual Adoption: Implement service mesh incrementally rather than all at once
  4. Enhance Observability: Leverage the mesh's telemetry capabilities with proper instrumentation
  5. Simplify Application Code: Move cross-cutting concerns to the mesh where appropriate

In future articles, I'll explore more advanced topics such as multi-cluster service meshes, mesh federation, and integrating service meshes with API gateways and event-driven architectures.


About the author: I'm a software engineer with experience in systems programming and distributed systems. Over the past years, I've been designing and implementing distributed systems in Go, with a focus on microservices, service mesh technologies, and cloud-native architectures.