#6 - CQRS & Event Sourcing: how technology is changing the logistics industry!


INTRODUCTION

Mozart is an experienced software architect with an impressive track record.
He currently performs a key role as Director of Software Architecture for e2log, who have created a next-generation global logistics platform.
We discussed the business, new technology, and gained extensive insight into CQRS and Event Sourcing!

VINTAGE: Can you summarise the great work you are doing within the logistics sector with new technology at the minute?

MB: Sure thing.
Surprisingly enough, even in this day and age when a car can drive itself to move people from A to B, many logistics departments of large corporations operate with very rudimentary processes to move cargo around the world, still using email, fax and spreadsheets as their main tools.

So we are building a logistics platform that allows any company to digitize and transform their logistics processes in just a couple of months of implementation. Our platform brings agility, transparency and cost savings to our customers.

Our platform was born in the cloud: we have built a highly scalable B2B SaaS solution based on microservices on top of Kubernetes, running on the AWS cloud. The platform benefits from modern architectural patterns like CQRS/ES and efficient service-to-service communication protocols.

VINTAGE: What are the challenges of service-to-service communication?

MB:
This is actually one of the main pain points when adopting a microservices architecture: you are forced to trade safe local intermodule calls for calls across the network, which bring all kinds of faults and complexities.

I could talk for a whole hour about this question alone, but I will try to highlight some important considerations to take into account.

Challenge number 1 is reliability: how can systems implement a fault-tolerant protocol? This involves retry mechanisms, timeouts and load balancing, to name a few of the things you need in order to guarantee reliable communication.
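To make the retry-plus-timeout idea concrete, here is a minimal sketch of a retrying caller with exponential backoff. The class and method names are illustrative, not part of any real framework; a production system would typically lean on a library or a service mesh for this.

```java
import java.time.Duration;
import java.util.function.Supplier;

// Hypothetical helper illustrating a simple fault-tolerance building block:
// retry a call up to maxAttempts times, doubling the backoff between attempts.
public class RetryingCaller {
    public static <T> T callWithRetry(Supplier<T> call, int maxAttempts, Duration initialBackoff) {
        Duration backoff = initialBackoff;
        RuntimeException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException e) {
                last = e; // remember the failure; retry if attempts remain
                if (attempt < maxAttempts) {
                    try {
                        Thread.sleep(backoff.toMillis());
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw last;
                    }
                    backoff = backoff.multipliedBy(2); // exponential backoff
                }
            }
        }
        throw last; // all attempts exhausted
    }
}
```

Real implementations also add jitter to the backoff and cap the total elapsed time, so a slow dependency cannot stall callers indefinitely.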

I'm not sure if you have heard of the eight fallacies of distributed computing.
Some of those aren't much of a concern these days, given all the advances in networking performance and redundancy, but fallacy number 2, "latency is zero", is still a concern in any distributed system today. With that, I'd say challenge number 2 is sustained low latency, so you can always delight customers with a snappy experience.

Another challenge is how the protocols can evolve over time without breaking the API contracts.
Most existing applications implement service-to-service protocols based on REST/JSON. While I still think this is a valid choice for communicating with external systems and for backend-to-web-app communication, I don't see it as the best choice for internal service-to-service calls, and I'll try to explain why.

This combination is easy to implement, but like everything else in software architecture there are trade-offs. One problem with JSON is that it is schemaless. I understand that conceptually it's good to follow Postel's law, which says "Be conservative in what you send and liberal in what you accept", but I don't see any advantage in being too liberal with internal protocols.
Another issue is that it is not optimized for low latency: JSON payloads are text based, they tend to be larger than they need to be, and they use much more system memory to serialize and deserialize when compared to binary formats.

One of the challenges of REST is that it only provides a synchronous protocol: you make a REST call and wait for an immediate answer. Another issue is that your operation names might not match any of the HTTP verbs, so you will have to enforce a combination of an HTTP verb plus an endpoint naming strategy to overcome this limitation; otherwise there's a lack of consistency and the APIs become hard to use.

For us, we currently value type safety, and we have learned that some service-to-service calls work better as asynchronous calls, so REST/JSON doesn't really buy us much for internal calls. That said, I'm not against REST/JSON: we still use it for web-app-to-backend communication and are happy with that for now, although we may look at GraphQL there if we see fit.

Inter-service collaboration can be synchronous or asynchronous depending on the use case; some processes are asynchronous by nature, so the protocol you choose should give you good APIs to support both types.

In our case we have been implementing gRPC for all internal calls. gRPC resolves many of the limitations of REST/JSON cited above without losing flexibility. Some of the features we value are support for multiple programming languages, messages following a schema that is easy to evolve, a more efficient payload on the network, easy conversion to JSON when needed, and built-in support for client-streaming and server-streaming APIs, all with much lower latency thanks to protobuf payloads over HTTP/2 transport.

VINTAGE: Can you provide insight into CQRS and Event Sourcing?

MB:
CQRS stands for Command Query Responsibility Segregation. It's a pattern for application data storage and retrieval. Normally, applications write a record to a designated table in the database and read from that same table when the data is requested by a query.
In CQRS, the write layer is separated from the read layer, and there are two distinct, fit-for-purpose data storage objects instead of just one table. The first object is modeled in a specialized format for the write queries (like inserts and updates in SQL databases).

The second is a data object derived from the first, shaped in a way that is optimized for the read queries, like SQL select statements. The read model could be a view, a materialized view, or even live in a totally different database engine.
The write model could be an append-only data store like Kafka, for instance. The choice of persistence technology will vary with the use case, but what is important is that the two models are not coupled and can use different stacks. One challenge is keeping the read model consistent with the write model.

One advantage of CQRS is that the read queries are much simpler to write and perform much better, because there's no need to join multiple tables to get the result: the read object already has all the fields molded together. Additionally, the read queries can connect to read-only database replicas, reducing the load on the primary transactional write instance.
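A minimal in-memory sketch of the split described above might look like this. The shipment domain, class names and field shapes are illustrative assumptions, not the actual platform code; the point is only that commands mutate a write model and project into a separate, pre-joined read model that queries can hit without joins.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal CQRS sketch: the write side stores normalized records, the read
// side keeps a denormalized, query-ready view refreshed on every write.
public class ShipmentCqrs {
    // Write model: normalized "tables" (one map per attribute here)
    private final Map<String, String> shipmentDestinations = new HashMap<>();
    private final Map<String, String> shipmentCarriers = new HashMap<>();

    // Read model: one pre-joined summary row per shipment
    private final Map<String, String> shipmentSummaries = new HashMap<>();

    // Command side: mutate the write model, then project into the read model
    public void handleCreateShipment(String id, String destination, String carrier) {
        shipmentDestinations.put(id, destination);
        shipmentCarriers.put(id, carrier);
        project(id);
    }

    private void project(String id) {
        // The "join" happens once at write time, not on every query
        shipmentSummaries.put(id,
            shipmentDestinations.get(id) + " via " + shipmentCarriers.get(id));
    }

    // Query side: reads hit the already-shaped summary, no joins needed
    public String querySummary(String id) {
        return shipmentSummaries.get(id);
    }
}
```

In a real system the two models would live in separate stores and the projection would be asynchronous, which is where the consistency challenge mentioned above comes from.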

Event Sourcing is a pattern where application state is maintained by storing only the state transitions instead of the final state. All data is immutable: the state transitions are modeled as events, the sequence of events is saved to an Event Store component, and the current state is computed by replaying all the events in the sequence in which they occurred.

As an example, if we were to use ES for a banking application, the sequence of events for an account could be:
1 – Account deposited $500
2 – Account withdrew $20

So the current state of the balance is the computation of all deposit and withdrawal events, instead of having an account table with only a balance field:
1 – Balance computation = +500
2 – Balance computation = +500 – 20

Now imagine using this concept for anything that changes state in the system; that's what Event Sourcing is.

When ES is used together with CQRS, ES becomes the write persistence mechanism. Every time an event is emitted, the read side reacts to it, computing and storing the updated state in a new version of the read model, ready to be consumed by queries on the read layer.

VINTAGE: How would event sourcing benefit a logistics organisation?

MB:
When a package is moved around the globe, it can traverse multiple geographies, each with its own regulations, use diverse transportation modes like road and ocean, and change hands from one service provider to the next. So keeping track of the entire journey of a package until its destination is a very important function for any logistics operation.

It turns out that event sourcing is an excellent solution for this use case, because all the facts about what happened to the package are preserved in the event history, which thus becomes a free audit log.

Our application takes advantage of ES by providing rich views of the package journey. We can delight customers with things like tracking where their package currently is and showing the previous steps along the way, or, for a delivered package, looking back to see what was paid to each service provider, verifying whether the cargo was ever delayed, and which event or service provider caused the delay.
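As a sketch of the idea, here is how an event history can double as both a current-position view and an audit log. The event shape and locations are made up for illustration and are not the actual platform schema.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: the package's event history is the single source of truth.
// The "current location" and the audit log are both projections of it.
public class PackageJourney {
    record JourneyEvent(String location, String detail) {}

    private final List<JourneyEvent> events = new ArrayList<>();

    public void record(String location, String detail) {
        events.add(new JourneyEvent(location, detail));
    }

    // Current position = the most recent event in the history
    public String currentLocation() {
        return events.get(events.size() - 1).location();
    }

    // Free audit log: every step the package took, in order
    public List<String> auditLog() {
        List<String> log = new ArrayList<>();
        for (JourneyEvent e : events) {
            log.add(e.location() + ": " + e.detail());
        }
        return log;
    }
}
```

Because nothing is ever overwritten, questions like "was this cargo ever delayed, and by whom?" can be answered long after delivery just by reading back the events.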

VINTAGE: What blogs, articles, podcasts do you follow for extra learning?

MB:

Books I recently read:
– 37 Things One Architect Knows by Gregor Hohpe
– Domain Driven Design Distilled by Vaughn Vernon

Book I worked on as a Technical Reviewer:
– Modular Programming in Java 9 by Koushik Kothagal

Last public hackathon I participated in:
– 2018 hack.summit("blockchain")

Articles
– InfoQ
– DZone
– Started following your blog recently

Software Engineering Daily podcast

Twitter:
– Martin Kleppmann @martinkl
– @KentBeck
– @VaughnVernon
– @ChristianPosta
– @SamNewman
– @RealGeneKim
– @headinthebox
– @normanmaurer
– @_JamesWard

VINTAGE:
Really interesting work you are doing Mozart. I appreciate the time you have taken to provide insight into the new technology and how it is impacting the logistics sector. Good luck!
