Increasing Importance of Integration Architecture

Thinking Event-driven Integration first

Integration architecture and design have long been hailed by the industry as the means to bring together various systems into a unified business capability. These systems could be frontends, backends and databases, and the components within each of them need to interact and communicate with one another. Be it application or data, the integration principles and components have stayed the same since the discipline evolved: channels, systems, communication protocols, payload design, control routing, endpoints, payload transformations and so on. There are plenty of articles by software giants that get down to bare-bones implementations of these integration systems. What we will discuss in this article is the importance of integration in today's world, what cloud platforms bring to deliver it, and the frameworks (open-source or custom built) that can be used to build such implementations with ease and elegance.

Application Build: Where are we heading?

In today's world, more than ever, delivering a business capability requires various tools, products, frameworks and systems to come together. Right from security to the user interface, all the general capabilities are offered by very mature solutions residing across various platforms, which in turn demand a very robust integration design. In short, application building is becoming more like LEGO, where we put capabilities from various tools together to create a competitive, production-grade application.

"The application building is becoming more like a LEGO where we put capabilities from various tools together to create a competitive production grade application."

This has pushed integration design beyond frontend/backend/database to a web of services and solutions, all the way to conceiving the entire application by stitching together various powerful products and services.

Capability abstracted to an API Call

Cognitive solutions like speech-to-text, language translation and specific AI models (e.g., sentiment analysis) are offered out of the box, where integrating a few endpoints together can provide end-user solutions that were once considered multi-year platform builds. Even fundamental services like security (authentication/authorization), search (type-ahead, weighted) and application monitoring are offered by products and frameworks, making the entire application development seamless.

"Even fundamental services like security, search and Application Monitoring are offered by products and framework which made the entire application development seamless."

API management solutions in the past decade pushed any product/framework build to expose a (RESTful) API. This abstracted the internal workings of the system into a simple RESTful call where a payload is sent in and a payload is received as a response (subject to security, latency, resiliency, callback and reconciliation protocols, which in principle have remained the same since their inception).
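To illustrate how a once-complex capability collapses into a single API call, here is a minimal sketch of invoking a hypothetical sentiment-analysis endpoint; the URL, header names and response shape are illustrative assumptions, not any specific vendor's API.

```python
import requests

# Hypothetical sentiment-analysis endpoint; URL, auth header and
# response fields are assumptions for illustration only.
SENTIMENT_URL = "https://api.example.com/v1/sentiment"
API_KEY = "replace-with-your-key"

def analyze_sentiment(text: str) -> dict:
    """Send a payload in, get a payload back: the whole capability is one call."""
    response = requests.post(
        SENTIMENT_URL,
        json={"text": text},
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # e.g. {"label": "positive", "score": 0.93}

if __name__ == "__main__":
    print(analyze_sentiment("The delivery was quick and the product works great."))
```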

A lateral approach to an application build

While we have various solutions and their integrable features on the table, the next question is how to go about putting those solutions together. As stated above, the principles of integration have remained the same since the discipline reached maturity, with concepts (and their options) around channels, systems, communication protocols, communication payload design, control routing, endpoints and payload transformations. With the advent of various open-source and cloud PaaS solutions, an application design has to start by conceiving the integration design first. Be it data integration (from federated stores to a central integrated data store) or application (service) integration, building a message communication platform on a mature events framework enables the system to harness the best solutions available across various tools and to stay relevant for years to come, since any extension to replace or expand a solution becomes, in principle, yet another API integration.

"With the advent of various open source and cloud PaaS solutions, an application design has to start from conceiving the Integration Design first; be it Application Integration or Data Integration."

Event Driven Architecture

Thinking about an integration design has to start from a business use case viewed with a different frame of mind. Let's take a retail shopping and order processing system in the classical sense: if we were to build such a system from the ground up so that it can scale to handle hundreds of transactions per second, we would naturally think of it as a microservice design, with various services built to scale up and down, orchestrated in containers or VMs. But if the system has to interact with services outside its own scope (e.g., payment processed by a third party, shipment from another integrated …), then beyond the microservices providing scalability, the way the microservices communicate has to be baked into the design. While API management handles throttling, security and auditing, knitting those APIs together in a scalable way needs a way of thinking that goes beyond microservices design.

"To start thinking of an integration design has to start from a business use case thought with a different frame of mind. While API management manages the throttling, security and auditing; the knitting of those API in a scalable way needs a way of thinking more than just microservices design."

At the heart of it, the design principle for an event framework is that every event triggers a set of further action(s) (a.k.a. services) to be invoked, which results in corresponding event(s), which in turn trigger additional action(s) (a.k.a. services), and so on...

[Figure: Event Driven Architecture]
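As a minimal sketch of this event-action chain (the handler names and the in-memory queue are illustrative assumptions; a real build would sit on the messaging framework described below), consider:

```python
from collections import deque

# In-memory stand-in for the messaging framework; illustrative only.
event_queue = deque()

def handle_order_placed(event: dict) -> list:
    """An action invoked by one event, emitting follow-up events."""
    return [
        {"type": "INVENTORY_RESERVED", "order_id": event["order_id"]},
        {"type": "PAYMENT_REQUESTED", "order_id": event["order_id"]},
    ]

def handle_payment_requested(event: dict) -> list:
    return [{"type": "SHIPMENT_REQUESTED", "order_id": event["order_id"]}]

# Each event type maps to the action(s)/service(s) it triggers.
handlers = {
    "ORDER_PLACED": [handle_order_placed],
    "PAYMENT_REQUESTED": [handle_payment_requested],
}

event_queue.append({"type": "ORDER_PLACED", "order_id": "A-123"})
while event_queue:
    event = event_queue.popleft()
    for handler in handlers.get(event["type"], []):
        event_queue.extend(handler(event))   # events trigger actions, which emit events
    print("processed", event["type"])
```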


Messaging Framework:

Having said that the principles of integration architecture remain the same, Event Driven Architecture can be implemented by leveraging a scalable messaging framework and building a few capabilities around it. On-premise this could be Kafka; in the cloud this could be Kinesis/EventHub/PubSub … (AWS/Azure/GCP …). This provides scalable, resilient, low-latency message communication and will be the bedrock of the Event Driven Architecture.
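As a minimal sketch, assuming an on-premise Kafka cluster and the kafka-python client (the broker address and topic name are illustrative), publishing an event onto the messaging framework looks like this:

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Broker address and topic name are assumptions for illustration.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda payload: json.dumps(payload).encode("utf-8"),
)

# Publish an order event onto the platform's bedrock: the messaging framework.
producer.send("orders", {"type": "ORDER_PLACED", "order_id": "A-123"})
producer.flush()
```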

Services:

These are the processes that act upon the messages delivered on the messaging framework. A service could be a serverless function, a microservice, a database persistence step or an old-school monolithic application with listeners looking for a trigger. They are capable of being invoked when a message arrives and of producing a message based on the outcome.
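Continuing the Kafka-based sketch above (again with illustrative topic and group names), a service is simply something that consumes a message, does its work and produces a resulting message:

```python
import json
from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "orders",                          # topic this service listens to (assumed)
    bootstrap_servers="localhost:9092",
    group_id="inventory-service",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda payload: json.dumps(payload).encode("utf-8"),
)

for message in consumer:
    order = message.value
    # ... reserve stock, call a warehouse API, persist to a database, etc. ...
    outcome = {"type": "INVENTORY_RESERVED", "order_id": order["order_id"]}
    producer.send("inventory", outcome)   # the outcome becomes the next event
```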

Producer/Consumer Classification:

This design aspect holds the key to accessing the messaging framework: which services can write messages to it and which can read messages from it. Beyond security and access management, this configuration lets the Event Driven platform chain complex services to deliver an end-to-end business capability (like the retail shopping and order processing system).
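A minimal sketch of such a classification, expressed here as a plain Python mapping that a deployment pipeline could translate into broker ACLs (the service and topic names are assumptions):

```python
# Which service may produce to / consume from which topic; names are illustrative.
TOPIC_ACCESS = {
    "order-service":        {"produce": ["orders"],    "consume": []},
    "inventory-service":    {"produce": ["inventory"], "consume": ["orders"]},
    "shipping-service":     {"produce": ["shipments"], "consume": ["inventory"]},
    "notification-service": {"produce": [],            "consume": ["shipments"]},
}

def can_consume(service: str, topic: str) -> bool:
    """Gatekeeper check before a service is wired to a topic."""
    return topic in TOPIC_ACCESS.get(service, {}).get("consume", [])

assert can_consume("inventory-service", "orders")
assert not can_consume("order-service", "shipments")
```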

Message Payload Standardization:

This is critical, as it is what transforms a very powerful messaging platform into an intelligent Event Driven solution. When scaling the application to production grade, beyond the sheer volume of data there will be numerous services reading and writing messages to the messaging framework. Categorizing the messages and defining a method to the madness is therefore essential to build and sustain the Event Driven platform.

"Message payload standardization is so critical, as this is the one that transforms a powerful messaging platform into an intelligent Event Driven solution."

Based on our experience, establishing the message headers (UUID, category, sender, timestamp, system/application/service name etc.) and governing them tightly, while governing the message body (the API payload, stored in a Swagger definition) loosely, gives the best balance between speed and discipline (and, as we all know, speed and discipline rarely go hand in hand).
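A minimal sketch of such an envelope, with tightly governed headers and a loosely governed body (the field names are our assumptions, not a standard):

```python
import json
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class EventEnvelope:
    # Tightly governed headers: every message on the platform must carry these.
    category: str                     # e.g. "ORDER", "PAYMENT", "SHIPMENT"
    sender: str                       # producing system/application/service name
    event_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    # Loosely governed body: shape is owned by the producing service's API contract.
    body: dict = field(default_factory=dict)

envelope = EventEnvelope(
    category="ORDER",
    sender="order-service",
    body={"order_id": "A-123", "items": [{"sku": "SKU-9", "qty": 2}]},
)
print(json.dumps(asdict(envelope), indent=2))
```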

Monitoring Framework:

The messaging framework chosen above provides standard built-in monitoring (throughput, security, errors/warnings, volumetrics). But for the purpose we are using the messaging framework, we need more fine-grained monitoring. Information such as which messages came in, which services consumed them, how long it took to process each message, and failures of a service to conform to or process a message becomes essential to record on this Event Driven platform in order to run production-grade transactional systems.
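One way to capture that fine-grained view is a small wrapper around every handler that records the message, the consuming service, the processing time and any failure. This is a sketch under our own naming, not a feature of any particular messaging product:

```python
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("edp-monitoring")

def monitored(service_name: str, handler):
    """Wrap a message handler to record what the broker's built-in metrics miss."""
    def wrapper(envelope: dict):
        started = time.perf_counter()
        try:
            result = handler(envelope)
            log.info("service=%s event=%s category=%s duration_ms=%.1f status=ok",
                     service_name, envelope.get("event_id"),
                     envelope.get("category"),
                     (time.perf_counter() - started) * 1000)
            return result
        except Exception:
            log.exception("service=%s event=%s status=failed",
                          service_name, envelope.get("event_id"))
            raise
    return wrapper

# Usage: wrap the real handler before wiring it to the consumer loop.
handle = monitored("inventory-service", lambda env: {"reserved": True})
handle({"event_id": "e-1", "category": "ORDER"})
```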

Extent to which we need to use the Messaging Framework

Fundamentally, there are two designs in terms of how much to use the messaging framework. Assume that we received an order and there are 10 processes to be executed (inventory check, shipment invocation, …, sending a push notification to the user). We can (1) build out 10 services and let each service consume the message from its predecessor (from the messaging framework) and put its own message back into the messaging framework for the successor service to consume. Alternatively, (2) we can consume the event that an order has been placed by a user from the messaging framework and then have a 'Mediator' service carry the message from one service to the next and orchestrate all 10 services, or (3) take a mid way between the two.

While there isn't a right answer, the trade-off is that if we go with option 1, the messaging framework takes care of most of the resiliency and scale; in option 2, the mediator has to be designed to manage those.
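A minimal sketch of option 2, a mediator that orchestrates the chain itself (the step names are illustrative; option 1 would instead have each step consume and produce events on the framework, as in the consumer sketch above):

```python
# Hypothetical step implementations; in reality each would be a service/API call.
def check_inventory(order): return {**order, "inventory": "reserved"}
def invoke_shipment(order): return {**order, "shipment": "scheduled"}
def send_push_notification(order): return {**order, "notified": True}

PIPELINE = [check_inventory, invoke_shipment, send_push_notification]  # ... up to 10 steps

def mediator(order_event: dict) -> dict:
    """Option 2: one service carries the message through every step, so it must
    own the retries, scaling and failure handling that option 1 delegates to the broker."""
    state = order_event
    for step in PIPELINE:
        state = step(state)
    return state

print(mediator({"order_id": "A-123"}))
```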

"While there isn’t a right answer, the trade-off is either to let the Messaging Framework take care of most of the resiliency and scale OR let the mediator to manage those."

Tightly-knit data integration

Data integration (real-time data integration consuming data events) is usually better done by closely knitting the data services together. For example, assume we receive customer data from various systems in real time and need to take those events through typical data integration services/processes (data validation, data transformation, data normalization etc.) and persist the data into an integration layer. While we could technically design all of these as standalone services that read and write from the messaging framework, it is better to knit them closely via a data processing framework. Frameworks like Spark Streaming provide a very scalable, resilient platform through which we can consume, transform and persist millions of records, while all the side-car services are provided by the framework out of the box. This reduces latency by avoiding the time spent producing and consuming intermediate events on the messaging framework, letting Spark Streaming hand the data off between steps instead. Irrespective of the integration implementation, the data integration services/processes can be designed as standalone functions and kicked off by invoking the Spark streaming context.
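A minimal sketch, assuming PySpark Structured Streaming reading customer events from a Kafka topic (the topic, column names and sink paths are illustrative):

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, lower, trim
from pyspark.sql.types import StringType, StructField, StructType

spark = SparkSession.builder.appName("customer-integration").getOrCreate()

# Loosely governed body schema for customer events; assumed for illustration.
schema = StructType([
    StructField("customer_id", StringType()),
    StructField("email", StringType()),
    StructField("country", StringType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "customers")
       .load())

# Validation, transformation and normalization are closely knit in one pipeline,
# with no intermediate hops back to the messaging framework between steps.
customers = (raw.select(from_json(col("value").cast("string"), schema).alias("c"))
             .select("c.*")
             .filter(col("customer_id").isNotNull())           # validation
             .withColumn("email", lower(trim(col("email")))))   # normalization

query = (customers.writeStream
         .format("parquet")
         .option("path", "/data/integration/customers")         # integration layer (assumed)
         .option("checkpointLocation", "/data/checkpoints/customers")
         .start())
query.awaitTermination()
```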

"Irrespective of the integration implementation, processes like data integration can be designed as a standalone function and kicked-off by invoking the spark streaming context."

Loosely-knit application service integration

To oversimplify the scope here, let's treat everything other than core data processing as an 'Application Service'. This could be an API call to send a notification, detect a threat or convert speech to text. As a general rule of thumb, it is better to knit these loosely, be it with option (1) or (2) described above. Unlike a data processing platform, these systems are distributed across various pieces of hardware, and it is essential to throttle and manage resilience outside of the respective systems. Lately, the line between data and application service integration is narrowing; frameworks like Akka have been gaining momentum by offering a hybrid solution that can fit use cases ranging from data to application integration.
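A minimal sketch of keeping an application service loosely knit: the call to the external endpoint carries its own timeout, retry and backoff, since that resilience has to live outside the remote system (the notification URL and payload are assumptions):

```python
import time
import requests

NOTIFY_URL = "https://api.example.com/v1/notifications"    # hypothetical endpoint

def send_notification(user_id: str, message: str, retries: int = 3) -> bool:
    """Throttling and resilience handled on our side of the loose integration."""
    for attempt in range(retries):
        try:
            response = requests.post(
                NOTIFY_URL,
                json={"user_id": user_id, "message": message},
                timeout=2,
            )
            if response.status_code == 429:                  # throttled by the provider
                time.sleep(2 ** attempt)                     # back off and retry
                continue
            response.raise_for_status()
            return True
        except requests.RequestException:
            time.sleep(2 ** attempt)
    return False                                             # give up; emit a failure event instead

send_notification("user-42", "Your order A-123 has shipped.")
```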

"As a general rule of thumb, it is better to knit Application services loosely. Lately, the line between the Data and Application services integration are narrowing. Frameworks 'AKKA' are recently gaining momentum to offer a hybrid solution that could fit solution rating from Data to Application integration."

Security

With more touchpoints come more points for intrusion, and hence security becomes even more important for protecting the various service endpoints. This is implemented as multi-level, redundant security. At a high level, the services/endpoints are categorized as internal and external. External services get a public endpoint, while all internal endpoints are placed on a separate VNET with no external endpoint, providing perimeter security. Any service call is validated by establishing SSL trust between the two parties, and a temporary token is exchanged between them to validate authenticity. Over and above that, any PII data has to go through the data encryption protocol to ensure that it is encrypted in motion and at rest.
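A minimal sketch of what this looks like from a calling service's side, using mutual TLS plus a short-lived bearer token (all URLs, file paths and the token endpoint are illustrative assumptions):

```python
import requests

# Illustrative paths/URLs; the certificates establish mutual SSL/TLS trust.
CLIENT_CERT = ("/secrets/service-a.crt", "/secrets/service-a.key")
INTERNAL_CA = "/secrets/internal-ca.pem"
TOKEN_URL = "https://auth.internal.example.com/oauth/token"
ORDERS_URL = "https://orders.internal.example.com/v1/orders"

def fetch_token() -> str:
    """Exchange the mTLS-authenticated identity for a temporary token."""
    response = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials"},
        cert=CLIENT_CERT,
        verify=INTERNAL_CA,
        timeout=5,
    )
    response.raise_for_status()
    return response.json()["access_token"]

def create_order(payload: dict) -> dict:
    token = fetch_token()
    response = requests.post(
        ORDERS_URL,
        json=payload,                               # PII fields would also be encrypted at rest
        headers={"Authorization": f"Bearer {token}"},
        cert=CLIENT_CERT,
        verify=INTERNAL_CA,
        timeout=5,
    )
    response.raise_for_status()
    return response.json()
```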

"With more touchpoint comes more point for intrusion. And hence, multi-level redudent security becomes even more important to protect various service endpoints."

In Summary

With the adoption of various cloud platforms, marketplace offerings of niche capabilities as standalone services and the near-universal adoption of RESTful API standards, it would be a competitive disadvantage for any kind of product build NOT to leverage these best-in-class solutions. The boundaries of integration architecture have therefore been pushed far wider and deeper than they were around five years ago, and treating integration architecture as another architecture pillar is essential to any application/product/platform design.