A good API architecture is important for handling communication between microservices effectively. Don't be afraid to create new microservices, and try to decouple functionality as much as possible. e.g., instead of creating one notification service, create separate microservices for email notifications, SMS notifications, and mobile push notifications.
Here I’m assuming that you have an API gateway in place that manages the requests, handles routing to load balancing servers, and restricts unauthorized access.
- Synchronous protocol: HTTP is a synchronous protocol. The client sends a request and waits for a response from the service. That's independent of the client code execution, which could be synchronous (the thread is blocked) or asynchronous (the thread isn't blocked, and the response will eventually reach a callback). The important point here is that the protocol (HTTP/HTTPS) is synchronous: the client code can only continue its task when it receives the HTTP server response.
- Asynchronous protocol: Other protocols, like AMQP (a protocol supported by many operating systems and cloud environments), use asynchronous messages. The client code or message sender usually doesn't wait for a response. It just sends the message to a message broker service, e.g., RabbitMQ (or Kafka, if we're using an event-driven architecture).
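The contrast can be sketched in C#. This is only an illustration, not code from the article: the URL, queue name, and message content are made up, and the RabbitMQ.Client 6.x API (`IModel`, `BasicPublish`) is assumed.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using RabbitMQ.Client;

class ProtocolComparison
{
    // Synchronous protocol: the task only completes once the server responds,
    // even though async/await keeps the thread itself unblocked.
    static async Task<string> GetOrderStatusAsync(HttpClient http, string orderId)
    {
        return await http.GetStringAsync($"https://orders.example.com/status/{orderId}");
    }

    // Asynchronous protocol: the caller hands the message to the broker and moves on.
    // No receiver needs to be online at the moment of publishing.
    static void PublishOrderPlaced(IModel channel, string orderJson)
    {
        var body = Encoding.UTF8.GetBytes(orderJson);
        channel.BasicPublish(exchange: "", routingKey: "orders",
            basicProperties: null, body: body);
    }
}
```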
Why you should avoid synchronous protocols
- If you keep adding new microservices that communicate with each other, consuming endpoints directly in code will create a mess, especially when you have to pass extra information along with each request, e.g., an auth token.
- You have to wait for time-consuming calls to return a response.
- If a call fails and you have a retry policy in place, the retries can create a bottleneck.
- If the receiver service is down or unable to process the request, we want the request to wait until the service is up again. e.g., on an e-commerce site, a user places an order and the request is sent to the shipment service to ship it, but the shipment service is down and the order is lost. How do we send the same order to the shipment service once it's back up?
- The receiver might not be able to handle many requests at a time, so there should be a place where requests wait until the receiver is ready to process the next one.
To tackle these challenges, we can use an intermediate service that handles communication between two microservices, known as a "message broker".
RabbitMQ is widely used as a message broker, or you can use Azure Service Bus if Azure is your hosting provider.
How to use RabbitMQ to handle communication between microservices
There can be a scenario where a sender wants to send a message to multiple services. The image below shows how RabbitMQ handles that.
When a publisher sends a message, it's received by an exchange, and the exchange routes it to the target queues. A message remains in its queue until a receiver has received and processed (acknowledged) it.
Direct exchange delivers messages to queues based on an exact match of the message routing key. The default (nameless) exchange is a direct exchange.
Fanout exchange delivers messages to all queues bound to it, ignoring the routing key.
Headers exchange identifies the target queue based on message headers instead of the routing key.
Topic exchange is similar to direct exchange, but the routing is done according to a routing pattern. Instead of a fixed routing key, it uses wildcards: "*" matches exactly one word, and "#" matches zero or more words.
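With the RabbitMQ.Client package, the four exchange types are declared as follows. This is a sketch assuming the 6.x client API; the exchange and queue names are illustrative.

```csharp
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Direct: routes on an exact routing-key match.
channel.ExchangeDeclare("orders-direct", ExchangeType.Direct);

// Fanout: copies every message to all bound queues.
channel.ExchangeDeclare("orders-fanout", ExchangeType.Fanout);

// Headers: matches on message header values instead of the routing key.
channel.ExchangeDeclare("orders-headers", ExchangeType.Headers);

// Topic: routes on wildcard patterns like "order.*.*.electronics".
channel.ExchangeDeclare("orders-topic", ExchangeType.Topic);

// A queue only receives messages after it's bound to an exchange.
channel.QueueDeclare("email-queue", durable: true, exclusive: false, autoDelete: false);
channel.QueueBind("email-queue", "orders-fanout", routingKey: "");
```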
For example, suppose we have the following routing patterns.
A routing pattern of “order.*.*.electronics” only matches routing keys where the first word is “order” and the fourth word is “electronics”.
A routing pattern of “order.logs.customer.#” matches any routing key beginning with “order.logs.customer”.
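The two patterns above can be wired up like this (a sketch assuming RabbitMQ.Client 6.x; exchange, queue names, and payloads are made up for illustration):

```csharp
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.ExchangeDeclare("order-events", ExchangeType.Topic);
channel.QueueDeclare("electronics", durable: true, exclusive: false, autoDelete: false);
channel.QueueDeclare("customer-logs", durable: true, exclusive: false, autoDelete: false);

// "*" matches exactly one word; "#" matches zero or more words.
channel.QueueBind("electronics", "order-events", "order.*.*.electronics");
channel.QueueBind("customer-logs", "order-events", "order.logs.customer.#");

// Matches the first binding: four words, starting "order", ending "electronics".
channel.BasicPublish("order-events", "order.new.laptop.electronics",
    basicProperties: null, body: Encoding.UTF8.GetBytes("laptop ordered"));

// Matches the second binding: begins with "order.logs.customer".
channel.BasicPublish("order-events", "order.logs.customer.signup",
    basicProperties: null, body: Encoding.UTF8.GetBytes("customer event"));
```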
Follow this link to install RabbitMQ on Windows. After installation, the RabbitMQ management portal will be up and running at http://localhost:15672/. Enter ‘‘guest’’ as both the username and password to log in, and you’ll be able to see all the statistics.
Create sender service
Once RabbitMQ is up and running, create two console applications:
- Sender: To send messages to RabbitMQ
- Receiver: To receive messages from RabbitMQ
Add package “RabbitMQ.Client” to both applications.
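A minimal version of the two applications could look like this. This is a sketch assuming the RabbitMQ.Client 6.x API; the queue name and message text are made up.

```csharp
// Sender/Program.cs — publishes a message to a queue.
using System;
using System.Text;
using RabbitMQ.Client;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

// Declaring the queue is idempotent; whichever app starts first creates it.
channel.QueueDeclare("order-queue", durable: false, exclusive: false, autoDelete: false);

var body = Encoding.UTF8.GetBytes("Order #1001 placed");
channel.BasicPublish(exchange: "", routingKey: "order-queue",
    basicProperties: null, body: body);
Console.WriteLine("Message sent.");
```

```csharp
// Receiver/Program.cs — consumes messages from the same queue.
using System;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

var factory = new ConnectionFactory { HostName = "localhost" };
using var connection = factory.CreateConnection();
using var channel = connection.CreateModel();

channel.QueueDeclare("order-queue", durable: false, exclusive: false, autoDelete: false);

var consumer = new EventingBasicConsumer(channel);
consumer.Received += (sender, ea) =>
{
    var message = Encoding.UTF8.GetString(ea.Body.ToArray());
    Console.WriteLine($"Received: {message}");
    // Acknowledge so RabbitMQ removes the message from the queue.
    channel.BasicAck(ea.DeliveryTag, multiple: false);
};
channel.BasicConsume("order-queue", autoAck: false, consumer);

Console.ReadLine(); // keep the console app alive to receive messages
```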
On running the sender and receiver applications, you’ll see a queue created on the RabbitMQ portal and a spike on the graph showing that a new message was received. From the portal, you can see which service has pending messages to process, and you can add another instance of that service for load balancing.
At the start, you can work with RabbitMQ directly and things will go smoothly. But when complexity increases and you have a lot of endpoints calling other services, it can create a mess. Pretty soon, you’ll find yourself creating a wrapper around the driver to reduce the amount of code you need to write. e.g., every time you call an endpoint of another service, you have to provide an auth token. Then you’ll find yourself needing to deal with ack vs. nack, and you’ll create a simple API for that. Eventually, you’ll want to deal with poison messages: messages that are malformed and cause exceptions.
To deal with all these workflows, you can use NServiceBus. Let’s discuss a project structure:
Considering this architecture, the ClientUI endpoint sends a PlaceOrder command to the Sales endpoint. As a result, the Sales endpoint will publish an OrderPlaced event using the publish/subscribe pattern, which will be received by the Billing endpoint.
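Configuring the ClientUI endpoint for this architecture might look like the sketch below. It follows the shape of the standard NServiceBus RabbitMQ transport setup; exact method availability varies by package version, and the connection string is an assumption.

```csharp
// ClientUI/Program.cs — configure and start an NServiceBus endpoint
// that uses RabbitMQ as its transport.
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("ClientUI");

var transport = endpointConfiguration.UseTransport<RabbitMQTransport>();
transport.UseConventionalRoutingTopology();
transport.ConnectionString("host=localhost");

// Commands go to one specific endpoint, so route PlaceOrder to Sales.
var routing = transport.Routing();
routing.RouteToEndpoint(typeof(PlaceOrder), "Sales");

var endpointInstance = await Endpoint.Start(endpointConfiguration);
```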
Then send a message using an IMessageSession object:
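For example (a sketch: the `PlaceOrder` message contract and its `OrderId` property are assumed to live in a shared Messages project; `endpointInstance` is the started endpoint, which implements `IMessageSession`):

```csharp
using System;
using NServiceBus;

// Shared message contract: a command, addressed to a single endpoint (Sales).
public class PlaceOrder : ICommand
{
    public string OrderId { get; set; }
}

// Inside ClientUI, after the endpoint has started:
var command = new PlaceOrder { OrderId = Guid.NewGuid().ToString() };
await endpointInstance.Send(command); // routed to Sales by the routing config
```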
Finally, add a handler to receive and process the message:
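In the Sales endpoint, a handler class implementing `IHandleMessages<T>` is discovered automatically. A sketch, with the `OrderPlaced` event assumed to be in the shared Messages project:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Sales endpoint: handles PlaceOrder and publishes the OrderPlaced event.
public class PlaceOrderHandler : IHandleMessages<PlaceOrder>
{
    public Task Handle(PlaceOrder message, IMessageHandlerContext context)
    {
        Console.WriteLine($"Received PlaceOrder, OrderId = {message.OrderId}");
        // Publishing an event notifies every subscriber (here, Billing).
        return context.Publish(new OrderPlaced { OrderId = message.OrderId });
    }
}

// Shared message contract: an event, delivered to all subscribers.
public class OrderPlaced : IEvent
{
    public string OrderId { get; set; }
}
```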
This is a basic implementation of NServiceBus with RabbitMQ.
Avoid synchronous protocols when communicating between services. Use RabbitMQ to communicate between services and to hold messages temporarily before they’re delivered from source to destination. Use NServiceBus to decouple application code from the message broker and to manage long-running requests.