Communication — Service to Service

vishal gupta
8 min read · Oct 22, 2020


Service-to-Service Communication Patterns

Queries

Query — many times, one microservice needs to query another, requiring an immediate response to complete an operation.

There are a number of approaches for implementing query operations.

1. Request/Response Messaging

One approach is for the calling microservice to make a direct, synchronous HTTP request to the other service. While simple to implement, direct HTTP calls couple the services to each other, and as coupling among microservices increases, their architectural benefits diminish.

2. Materialized View pattern (very important)

With this pattern, a microservice stores its own local, denormalized copy of data that’s owned by other services.

Instead of the Shopping Basket microservice querying the Product Catalog and Pricing microservices, it maintains its own local copy of that data. This pattern eliminates unnecessary coupling and improves reliability and response time.

3. Request/Reply Pattern

Another approach for decoupling synchronous HTTP messages is a Request-Reply Pattern, which uses queuing communication.

Communication using a queue is always a one-way channel, with a producer sending the message and consumer receiving it.

With this pattern, both a request queue and a response queue are implemented. The caller sends a request message and then waits on the response queue for the reply, matching it to the original request with a correlation ID.
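The Request-Reply pattern can be sketched in-process with standard-library queues. This is a minimal illustration, not Azure queue code; the queue names and the price lookup are hypothetical:

```python
import queue
import threading
import uuid

# Two one-way channels: one carries requests, the other carries replies.
request_queue: "queue.Queue[dict]" = queue.Queue()
response_queue: "queue.Queue[dict]" = queue.Queue()

def responder():
    """Consumer side: read one request, do the work, reply on the response queue."""
    req = request_queue.get()
    response_queue.put({
        "correlation_id": req["correlation_id"],      # echo the ID back
        "body": f"price for {req['body']} is 9.99",
    })

# Producer side: tag the request with a correlation ID so the reply
# can be matched to the original request.
correlation_id = str(uuid.uuid4())
threading.Thread(target=responder).start()
request_queue.put({"correlation_id": correlation_id, "body": "sku-42"})

reply = response_queue.get(timeout=5)
assert reply["correlation_id"] == correlation_id      # match reply to request
print(reply["body"])  # price for sku-42 is 9.99
```

The correlation ID is what turns two independent one-way channels into a logical request/response conversation.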

Commands

Command — when the calling microservice needs another microservice to execute an action but doesn't require a response, such as, "Hey, just ship this order" (a fire-and-forget message).

Queues implement an asynchronous, point-to-point messaging pattern. In this scenario, either the producer or consumer service can scale out without affecting the other.
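The fire-and-forget command flow can be sketched with an in-process queue. This is a hypothetical illustration of the point-to-point pattern, not broker code; the shipping worker and order IDs are made up:

```python
import queue
import threading

commands: "queue.Queue[str]" = queue.Queue()
shipped = []

def shipping_worker():
    # The consumer drains commands at its own pace; the producer never waits
    # for a reply, so either side can scale without affecting the other.
    while True:
        order_id = commands.get()
        if order_id is None:          # sentinel value: stop the worker
            break
        shipped.append(order_id)

worker = threading.Thread(target=shipping_worker)
worker.start()

# Producer: "Hey, just ship this order" -- no response expected.
for order_id in ("order-1", "order-2", "order-3"):
    commands.put(order_id)

commands.put(None)                    # shut the worker down for this demo
worker.join()
print(shipped)  # ['order-1', 'order-2', 'order-3']
```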

The Azure cloud supports two types of message queues: Azure Storage Queues and Azure Service Bus Queues.

1. Azure Storage Queues — they provide a minimal feature set, but are inexpensive and can store millions of messages.

a. Message order isn’t guaranteed.

b. A message can only persist for seven days before it’s automatically removed.

c. Support for state management, duplicate detection, or transactions isn’t available.

2. Azure Service Bus supports a brokered messaging model.

a. The queue guarantees First-In/First-Out (FIFO) message delivery, respecting the order in which messages were added to the queue.

b. Messages are reliably stored in a broker (the queue) until received by the consumer. The queue can guarantee "at-most-once delivery" per message: it automatically discards a message that has already been sent.

c. Transaction support and a duplicate detection feature.

d. Two more enterprise features are partitioning and sessions.

• Service Bus Partitioning spreads the queue across multiple message brokers and message stores. The overall throughput is no longer limited by the performance of a single message broker or messaging store.

• Service Bus Sessions provide a way to group related messages. Imagine a workflow scenario where messages must be processed together and the operation completed at the end. To take advantage of sessions, they must be explicitly enabled for the queue, and each related message must contain the same session ID.
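The session idea can be sketched as grouping messages by a shared session ID so that each workflow's messages are handled together and in order. This is an in-process illustration with hypothetical workflow names, not Service Bus API code:

```python
from collections import defaultdict

# Each message carries a session ID; a session-aware consumer locks onto
# one session so all related messages land together, in order.
messages = [
    {"session_id": "workflow-A", "body": "step 1"},
    {"session_id": "workflow-B", "body": "step 1"},
    {"session_id": "workflow-A", "body": "step 2"},
    {"session_id": "workflow-A", "body": "complete"},
    {"session_id": "workflow-B", "body": "complete"},
]

# Group by session ID, preserving arrival order within each session.
sessions = defaultdict(list)
for msg in messages:
    sessions[msg["session_id"]].append(msg["body"])

print(sessions["workflow-A"])  # ['step 1', 'step 2', 'complete']
print(sessions["workflow-B"])  # ['step 1', 'complete']
```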

Events

Message queuing is an effective way for a producer to asynchronously send a single consumer a message. But what happens when many different consumers are interested in the same message? A dedicated message queue for each consumer wouldn't scale well and would become difficult to manage.

Eventing is a two-step process: for a given state change, a microservice publishes an event to a message broker, and any other interested microservice subscribes to receive it. You use the Publish/Subscribe pattern to implement event-based communication.

With eventing, we move from queuing technology to topics. A topic is similar to a queue, but supports a one-to-many messaging pattern.

Publishers send messages to the topic. At the other end, subscribers receive messages from subscriptions. In the middle, the topic forwards messages to subscriptions based on a set of rules.

For example, a "CreateOrder" event would be sent to Subscription #1 and Subscription #3, but not to Subscription #2. An "OrderCompleted" event would be sent to Subscription #2 and Subscription #3.
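The one-to-many routing described above can be sketched as a topic whose subscriptions each declare a filter rule. This is a minimal in-process model with hypothetical names, not a broker implementation:

```python
class Topic:
    """A topic forwards each published event to every subscription
    whose rule matches -- one-to-many, unlike a queue."""
    def __init__(self):
        self.subscriptions = {}

    def subscribe(self, name, rule):
        # rule is a predicate over the event type.
        self.subscriptions[name] = {"rule": rule, "messages": []}

    def publish(self, event_type, payload):
        for sub in self.subscriptions.values():
            if sub["rule"](event_type):
                sub["messages"].append((event_type, payload))

topic = Topic()
topic.subscribe("sub1", lambda t: t == "CreateOrder")
topic.subscribe("sub2", lambda t: t == "OrderCompleted")
topic.subscribe("sub3", lambda t: t in ("CreateOrder", "OrderCompleted"))

topic.publish("CreateOrder", {"order_id": 1})
topic.publish("OrderCompleted", {"order_id": 1})

# CreateOrder reached sub1 and sub3 but not sub2; OrderCompleted
# reached sub2 and sub3 -- matching the routing described above.
print(len(topic.subscriptions["sub3"]["messages"]))  # 2
```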

Azure Service Bus Topics

Sitting on top of the same robust brokered message model of Azure Service Bus queues are Azure Service Bus Topics.

A topic can receive messages from multiple independent publishers and send messages to up to 2,000 subscribers. Subscriptions can be dynamically added or removed at runtime without stopping the system or recreating the topic.

Many advanced features from Azure Service Bus queues are also available for topics, including

· Duplicate Detection

· Transaction support.

· Scheduled Message Delivery — tags a message with a specific time for processing. The message won't appear in the topic before that time.

Azure Event Grid

Service Bus implements an older-style pull model in which the downstream subscriber actively polls the topic subscription for new messages.

Event Grid

1. Implements a push model in which events are sent to the event handlers as they're received, giving near real-time event delivery.

That said, an event handler must handle the incoming load and provide throttling mechanisms to protect itself from becoming overwhelmed. Many Azure services that consume these events, such as Azure Functions and Logic Apps, provide automatic scaling capabilities to handle increased loads.

2. It also reduces cost as the service is triggered only when it’s needed to consume an event — not continually as with polling.

Unlike Azure Service Bus, Event Grid is tuned for fast performance and doesn’t support features like ordered messaging, transactions, and sessions.
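The pull-vs-push distinction can be sketched side by side. These are hypothetical in-process stand-ins for Service Bus (pull) and Event Grid (push), not the actual services:

```python
class PullSubscription:
    """Pull model (Service Bus style): the broker stores events until
    the subscriber asks for them."""
    def __init__(self):
        self.backlog = []

    def deliver(self, event):
        self.backlog.append(event)          # stored until polled

    def poll(self):
        msgs, self.backlog = self.backlog, []
        return msgs

class PushTopic:
    """Push model (Event Grid style): events are forwarded to registered
    handlers as they're received -- no polling loop."""
    def __init__(self):
        self.handlers = []

    def subscribe(self, handler):
        self.handlers.append(handler)

    def publish(self, event):
        for handler in self.handlers:       # near real-time delivery
            handler(event)

pull = PullSubscription()
pull.deliver("BlobCreated")
print(pull.poll())                          # ['BlobCreated'] -- only when asked

pushed = []
push = PushTopic()
push.subscribe(pushed.append)
push.publish("BlobCreated")
print(pushed)                               # ['BlobCreated'] -- immediately
```

In the push model the consumer does no work between events, which is why it's cheaper than continual polling.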

Streaming messages in the Azure cloud

Azure Service Bus and Event Grid provide great support for applications that expose single, discrete events, such as a new document being inserted into a Cosmos DB database.

But, what if your cloud-native system needs to process a stream of related events? Event streams are more complex. They’re typically time-ordered, interrelated, and must be processed as a group.

Azure Event Hubs

1. It's a data streaming platform and event ingestion service that collects, transforms, and stores events.

2. Event Hubs supports low latency and configurable time retention. Events stored in an event hub are only deleted upon expiration of the retention period, which is one day by default, but configurable.

3. Event Hubs supports common event publishing protocols, including HTTPS and AMQP. It also supports Kafka 1.0.

Unlike queues and topics, Event Hubs keep event data after it’s been read by a consumer. This feature enables other data analytic services, both internal and external, to replay the data for further analysis.
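The replay behavior can be sketched as an append-only log where each consumer tracks its own offset. This is a hypothetical in-process model of the Event Hubs idea, not the Event Hubs SDK:

```python
class EventStream:
    """Append-only event log: reads never remove data, so any consumer
    can replay the stream from any offset."""
    def __init__(self):
        self.log = []

    def append(self, event):
        self.log.append(event)

    def read(self, offset):
        """Return events from `offset` onward plus the new checkpoint."""
        return self.log[offset:], len(self.log)

stream = EventStream()
for reading in ("temp=20", "temp=21", "temp=23"):
    stream.append(reading)

# A near-real-time consumer reads from its last checkpoint...
events, checkpoint = stream.read(0)
print(events)        # ['temp=20', 'temp=21', 'temp=23']

# ...while an analytics service can independently replay from the start,
# because reading didn't consume anything.
replay, _ = stream.read(0)
print(replay == events)  # True
```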

Service Mesh communication infrastructure

We explored different approaches for implementing synchronous HTTP communication and asynchronous messaging.

In each of those cases, the developer is burdened with implementing communication code, which is complex and time-intensive to get right.

A more modern approach to microservice communication centers around a new and rapidly evolving technology called Service Mesh.

1. A service mesh is a configurable infrastructure layer with built-in capabilities to handle service-to-service communication, resiliency, and many cross-cutting concerns.

2. It moves the responsibility for these concerns out of the microservices and into service mesh layer. Communication is abstracted away from your microservices.

3. Sidecar pattern — in a cloud-native application, an instance of a proxy is typically co-located with each microservice. While they execute in separate processes, the two are closely linked and share the same lifecycle.

Messages are intercepted by a proxy that runs alongside each microservice. Each proxy can be configured with traffic rules specific to the microservice. It understands messages and can route them across your services and the outside world.

4. A service mesh typically integrates with a container orchestrator, such as Kubernetes.

Benefits -

1. Manages service-to-service communication

2. Provides service discovery

3. Provides load balancing
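What the mesh proxy takes off the application's plate can be sketched in a few lines: discovery, load balancing, and retries live in the proxy, not in service code. This is a conceptual sketch with hypothetical names, not a real mesh implementation:

```python
import random

class SidecarProxy:
    """Stands in for the co-located proxy: the microservice calls it
    locally, and routing concerns are abstracted away."""
    def __init__(self, registry):
        self.registry = registry       # service discovery: name -> instances

    def call(self, service, request, retries=2):
        for _ in range(retries + 1):
            instance = random.choice(self.registry[service])  # load balancing
            try:
                return instance(request)
            except ConnectionError:
                continue               # resiliency: retry another instance
        raise ConnectionError(f"{service} unavailable")

# Two healthy instances of a hypothetical pricing service, modeled
# as plain callables for this sketch.
registry = {"pricing": [lambda req: 9.99, lambda req: 9.99]}
proxy = SidecarProxy(registry)

# The microservice just asks its local proxy; it never knows which
# instance answered or how many retries happened.
print(proxy.call("pricing", {"sku": "sku-42"}))  # 9.99
```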

Database-per-microservice

In many ways, a single database

1. keeps data management simple.

2. Querying data across multiple tables is straightforward.

3. Changes to data update together or they all roll back.

4. ACID transactions guarantee strong and immediate consistency.

Why?

This database-per-microservice approach provides many benefits, especially for systems that must evolve rapidly and support massive scale. With this model -

1. Domain data is encapsulated within the service

2. Data schema can evolve without directly impacting other services

3. Each data store can independently scale

4. A data store failure in one service won’t directly impact other services

5. Segregating data also enables each microservice to implement the data store type that is best optimized for its workload, storage needs, and read/write patterns. Choices include relational, document, key-value, and even graph-based data stores.

Example — in the previous figure, note how each microservice supports a different type of data store.

The product catalog microservice consumes a relational database to accommodate the rich relational structure of its underlying data.

The shopping cart microservice consumes a distributed cache that supports its simple, key-value data store.

The ordering microservice consumes both a NoSQL document database for write operations and a highly denormalized key/value store to accommodate high volumes of read operations.

Cross-service queries

While microservices are independent and focus on specific functional capabilities, like inventory, shipping, or ordering, they frequently require integration with other microservices.

How can the shopping basket microservice add a product to the user's shopping basket when it has neither product nor pricing data in its database?

A widely accepted pattern for removing such cross-service dependencies is the Materialized View pattern.

1. With this pattern, you place a local data table (known as a read model) in the shopping basket service.

This table contains a denormalized copy of the data needed from the product and pricing microservices. Copying the data directly into the shopping basket microservice eliminates the need for expensive cross-service calls. With the data local to the service, you improve the service’s response time and reliability. Additionally, having its own copy of the data makes the shopping basket service more resilient.

2. The catch with this approach is that you now have duplicate data in your system.

However, strategically duplicating data in cloud-native systems is an established practice and not considered an anti-pattern, or bad practice. Keep in mind that one and only one service can own a data set and have authority over it. You’ll need to synchronize the read models when the system of record is updated. Synchronization is typically implemented via asynchronous messaging with a publish/subscribe pattern.
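The read model and its pub/sub synchronization can be sketched in-process. The class names and product data are hypothetical; the point is that the catalog remains the system of record while the basket answers queries from its local copy:

```python
class ProductCatalog:
    """System of record: owns the product data and publishes change events."""
    def __init__(self):
        self.products = {}
        self.subscribers = []

    def upsert(self, sku, name, price):
        self.products[sku] = {"name": name, "price": price}
        for notify in self.subscribers:          # publish the change event
            notify(sku, self.products[sku])

class ShoppingBasket:
    """Keeps a denormalized local read model -- no cross-service query needed."""
    def __init__(self, catalog):
        self.read_model = {}
        catalog.subscribers.append(self.on_product_changed)

    def on_product_changed(self, sku, data):
        self.read_model[sku] = data              # synchronize the read model

    def add_item(self, sku):
        return self.read_model[sku]["price"]     # answered entirely locally

catalog = ProductCatalog()
basket = ShoppingBasket(catalog)
catalog.upsert("sku-42", "Widget", 9.99)
print(basket.add_item("sku-42"))  # 9.99
```

In a real system the "subscribers" list would be a message broker topic, so the catalog and basket never call each other directly.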

Distributed transactions — Saga Pattern

While querying data across microservices is difficult, implementing a transaction across several microservices is even more complex.

You move from a world of immediate consistency to that of eventual consistency.

A popular pattern for adding distributed transactional support is the Saga pattern.

1. It’s implemented by grouping local transactions together programmatically and sequentially invoking each one. If any of the local transactions fail, the Saga aborts the operation and invokes a set of compensating transactions. The compensating transactions undo the changes made by the preceding local transactions and restore data consistency.

2. Saga patterns are typically choreographed as a series of related events, or orchestrated as a set of related commands.

3. The Service Aggregator pattern can serve as the foundation for an orchestrated saga implementation.
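The abort-and-compensate flow can be sketched as a sequence of (action, compensation) pairs. The order/payment/shipping steps are hypothetical; the structure is the point:

```python
def run_saga(steps):
    """Run local transactions in sequence; on failure, run the
    compensating transactions for completed steps in reverse order."""
    completed = []
    for action, compensate in steps:
        try:
            action()
            completed.append(compensate)
        except Exception:
            for undo in reversed(completed):   # restore data consistency
                undo()
            return "aborted"
    return "committed"

def fail_shipping():
    raise RuntimeError("no stock")             # the third step fails

log = []
steps = [
    (lambda: log.append("order created"),   lambda: log.append("order cancelled")),
    (lambda: log.append("payment charged"), lambda: log.append("payment refunded")),
    (fail_shipping,                         lambda: None),
]

result = run_saga(steps)
print(result)  # aborted
print(log)     # ['order created', 'payment charged', 'payment refunded', 'order cancelled']
```

Note the compensations undo work in reverse: the payment is refunded before the order is cancelled, mirroring how the forward steps ran.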

High volume data

Large cloud-native applications often support high-volume data requirements. In these scenarios, traditional data storage techniques can cause bottlenecks.

For complex systems that deploy on a large scale, both Command and Query Responsibility Segregation (CQRS) and Event Sourcing may improve application performance.

CQRS

1. CQRS is an architectural pattern that can help maximize performance, scalability, and security.

2. The pattern separates operations that read data from those operations that write data.

a. To improve performance, the read operation could query against a highly denormalized representation of the data to avoid expensive repetitive table joins and table locks.

b. The write operation, known as a command, would update against a fully normalized representation of the data that would guarantee consistency. Typically, whenever the write table is modified, it publishes an event that replicates the modification to the read table.

c. This separation enables reads and writes to scale independently.

d. Read operations use a schema optimized for queries, while the writes use a schema optimized for updates.

e. Read queries go against denormalized data, while complex business logic can be applied to the write model.

f. As well, you might impose tighter security on write operations than those exposing reads.

Implementing CQRS can improve application performance for cloud-native services. However, it does result in a more complex design.
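The read/write separation can be sketched minimally: a command updates the write model, which publishes an event that refreshes a denormalized read model. The order summary format and class names are hypothetical:

```python
class WriteModel:
    """Normalized store; every command publishes a change event."""
    def __init__(self, on_change):
        self.orders = {}
        self.on_change = on_change

    def place_order(self, order_id, customer, items):
        self.orders[order_id] = {"customer": customer, "items": items}
        self.on_change(order_id, customer, len(items))   # replicate to reads

class ReadModel:
    """Denormalized, query-optimized view -- no joins, no locks."""
    def __init__(self):
        self.summary = {}

    def apply(self, order_id, customer, item_count):
        self.summary[order_id] = f"{customer}: {item_count} item(s)"

    def get_summary(self, order_id):
        return self.summary[order_id]

read = ReadModel()
write = WriteModel(read.apply)
write.place_order("o-1", "Ada", ["sku-42", "sku-7"])
print(read.get_summary("o-1"))  # Ada: 2 item(s)
```

Because the two models are connected only by the event, each side can use its own schema, scale independently, and carry its own security rules.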

Event Sourcing is another approach to optimizing high-volume data scenarios.

A system typically stores the current state of a data entity. If a user changes their phone number, for example, the customer record is updated with the new number. We always know the current state of a data entity, but each update overwrites the previous state.

In most cases, this model works fine.

In high-volume systems, however, overhead from transactional locking and frequent update operations can degrade database performance and responsiveness, and limit scalability.

Event Sourcing takes a different approach to capturing data.

Each operation that affects data is persisted to an event store. With this approach, you maintain history. You know not only the current state of an entity, but also how you reached this state.

For a data store that directly supports event sourcing, Azure Cosmos DB, MongoDB, Cassandra, CouchDB, and RavenDB are good candidates.
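The append-only idea behind event sourcing can be shown with the phone-number example from above. The event names and customer data are hypothetical; the key property is that updates never overwrite history:

```python
# The event store is an append-only log of everything that happened.
event_store = []

def record(entity_id, event_type, data):
    event_store.append({"entity": entity_id, "type": event_type, "data": data})

def current_state(entity_id):
    """Derive the current state by replaying the entity's events in order."""
    state = {}
    for event in event_store:
        if event["entity"] == entity_id:
            state.update(event["data"])
    return state

record("cust-1", "CustomerCreated", {"name": "Ada", "phone": "555-0100"})
record("cust-1", "PhoneChanged",    {"phone": "555-0199"})

print(current_state("cust-1"))  # {'name': 'Ada', 'phone': '555-0199'}
print(len(event_store))         # 2 -- the old phone number is still in history
```

Appending is cheap and lock-friendly compared with in-place updates, which is the performance argument; the trade-off is that reads must replay (or snapshot) the event history.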


Written by vishal gupta

Software Architect, Author, Trainer