When complexity calls
There comes a point in an ecosystem's growth when further expanding, and later operating and maintaining, interoperability becomes a nightmare of complexity for every system involved in the communication. This is due not only to the complexity of the specific business domain, which dictates the functional requirements of each system, but also to the communication overhead that forces those systems to manage connections, various API contracts, data access logic, and credentials. All of this builds up to a hefty load of work, shifting the balance of time spent from development towards operations, especially if a system is developed and maintained by the same team. That, in turn, can lead to bottlenecks in project development, or a complete stop in further growth, as the team becomes busy solving production issues. This is where a law postulated in the mid-1980s can help us.
“Every application must have an inherent amount of irreducible complexity. The only question is who will have to deal with it.”
- - Law of conservation of complexity (Tesler’s law)
While Larry Tesler was mostly concerned with the interactions between software and its users, which is why his law is best known in the UX discipline, it holds a very important lesson for interoperability as a form of interaction between two different systems.
We can move most of the communication complexity away from the domain systems. It will not be reduced, but managed elsewhere.
“Complexity is not bad. It’s confusion that’s bad. Forget about simplicity; long live well-managed complexity.”
- - Don Norman, author of The Design of Everyday Things.
Complexity managed by mediation
If we assume that complexity can only be moved, not reduced, then we can certainly move the complexity of communication and reduce it to the bare minimum for all participants. The easiest way to do so is to introduce a Broker (a.k.a. a Mediator).
If we look at those terms from the perspective of IT ecosystems, an integration broker in a Broker Architecture acts as an intermediary between two parties that have differing requirements. The key word here is ‘differing’, as this is where the complexity hides. Communication is fairly simple if everyone speaks the same language and shares the exact same definition of every single word. In software terms, that means applications developed in the same coding languages, with shared data models represented in a common format like JSON or XML, and with semantic coupling managed at the level of the data itself. Unfortunately, this is rarely the case in the modern IT landscape, and this is where an integration broker and Broker Architecture come into play.
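To make the mediation idea concrete, here is a minimal sketch of a broker-side translation, assuming a hypothetical producer that emits JSON orders and a consumer that only accepts XML. The field names and payload shape are invented for illustration; the point is that neither system knows the other's format, because the broker owns the mapping.

```python
import json
import xml.etree.ElementTree as ET

def mediate(json_payload: str) -> str:
    """Translate a producer's JSON order into the XML the consumer expects.

    Neither party knows the other's format; the broker owns the mapping,
    which is where the 'differing requirements' complexity is managed.
    """
    order = json.loads(json_payload)
    root = ET.Element("Order", id=str(order["orderId"]))
    ET.SubElement(root, "Customer").text = order["customer"]
    ET.SubElement(root, "Total").text = f'{order["total"]:.2f}'
    return ET.tostring(root, encoding="unicode")

xml_out = mediate('{"orderId": 42, "customer": "ACME", "total": 99.5}')
print(xml_out)
```

In a real integration broker this mapping would live in a transformation step of an integration flow, but the principle is the same: the format mismatch is resolved once, centrally, instead of in every communicating system.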
What is Broker Architecture?
Broker Architecture is an ecosystem architectural style that introduces an infrastructural component, the integration broker, which is responsible for handling communication complexity, namely:
- systems using different communication protocols,
- various contract data models,
- mismatched data formats,
- orchestration, observability, abstraction, and extensibility capabilities.
It inherits certain qualities from the previous architectural styles and builds new ones on top of those. Point-to-point communication shifts from systems calling each other directly to invoking services exposed by the integration broker, which orchestrates the communication. On top of that we gain new capabilities, such as the ability to initiate communication entirely outside the business systems, by means of schedulers or change data capture mechanisms. Lastly, the integration flows in an integration broker have a workflow nature, usually tailored to the business process they need to support.
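The workflow nature of an integration flow can be sketched as an ordered pipeline of steps that a scheduler or change data capture trigger invokes, with no business system initiating the call. The flow name, step functions, and message shape below are all hypothetical, chosen only to illustrate the structure.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class IntegrationFlow:
    """A hypothetical integration flow: an ordered pipeline of steps.

    A scheduler or CDC mechanism can run it; no business system has
    to initiate the communication itself.
    """
    name: str
    steps: list  # each step: Callable[[dict], dict]

    def run(self, message: dict) -> dict:
        for step in self.steps:
            message = step(message)
        return message

def extract(m: dict) -> dict:    # pull records from the source system
    return {**m, "rows": [1, 2, 3]}

def transform(m: dict) -> dict:  # map to the target's expected values
    return {**m, "rows": [r * 10 for r in m["rows"]]}

def load(m: dict) -> dict:       # push the result to the target system
    return {**m, "delivered": True}

flow = IntegrationFlow("nightly-sync", [extract, transform, load])
result = flow.run({"trigger": "scheduler"})
print(result)
```

Real brokers express such flows in their own tooling rather than plain code, but the shape is the same: a named, dedicated workflow tailored to one business process.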
Qualitative Analysis
As we did with Point-to-point and Event-Driven Architecture, let’s explore the qualities of Broker Architecture and look into a few pitfalls that can be crucial to consider when trying to apply this architectural style. For that we will be using a comparison table that was produced through a qualitative analysis of architectural styles, taking several architectural characteristics into account. If you would like to learn more about this analysis or read how we define those characteristics, you can do so by reading this article, where we explain how this comparison was created.
Cost analysis
Considering how Broker Architecture changes the landscape compared to Point-to-point or Event-Driven Architecture, it is clear that the overall cost will be higher. As we stated at the beginning of this article, we are moving complexity away from the domain systems, which means moving development and operational cost to the integration broker. This comes with the need to hire or retrain developers to have the right skills on board. On the other hand, the business systems’ development teams can work in a protocol-agnostic way, as the integration broker can provide whatever protocol best suits each business system’s needs and the technology used to build it. This limits the sprawl of development costs to an extent and lands Broker Architecture at 2.
Operational costs in Broker Architecture are similar to EDA (3), mostly due to the effect of moved complexity. There is also the cost of licensing, which can influence the overall cost, as it varies between technology providers and their licensing models, depending on several factors (e.g. usage, deployment mode). The operational effort of running environments and the integration broker itself can be partially absorbed into the technology licenses if an iPaaS technology is chosen.
Architectural Change Cost is a bit higher for Broker Architecture (3) because the broker is a completely separate system with a distinct workflow nature. All integration flows are dedicated, which means they are more tightly coupled to the business systems and do not provide as much abstraction as they could. As a result, while changes might be entirely contained within the integration broker, they will impact the whole integration flow, and the changed business system needs to be deployed together with it as a single deployment unit.
Architectural and design time analysis
Moving on to the architectural and design time qualities of Broker Architecture, the qualitative analysis shows a well-rounded architecture with average scores across all characteristics in this category. Let’s start with the one that scores the lowest: composability. It scores only 2 because of the workflow nature of the integration broker. Reusability within each integration flow is mostly limited to code, e.g. reusable wrappers, connectors, perhaps some standardized mapping functions or transformations; it is rarely feasible to create reusable services, so it is harder to use business systems and integration flows as composable building blocks of the ecosystem. Removability, as part of composability, is a bit easier to achieve, given that all changes are encapsulated in the integration platform, but replacing a system has an operational impact on all integration flows and systems related to it, triggering full regression tests with each change. That is the downside of moving the communication complexity out of those systems and into the integration broker.
Moving on to simplicity (3), the landscape obviously became a bit more complex overall, giving Broker Architecture a score one point lower than EDA. It is worth noting that this complexity is only superficial: introducing a new system does not have to translate directly to higher complexity, as the number of interactions (operational coupling) is usually the better measure. The key is scale, as Broker Architecture will be used in more complex environments than EDA or Point-to-point, so it aligns with the growth of business complexity that is partially moved into the integration broker. The simplicity of this architectural style supports time to market very well, as integration flows are usually easier to develop with the right technology and a dedicated team of developers who specialize in such tasks.
The element that sets this architectural style apart from the previous ones is its capability to provide abstraction (3) of business systems’ contracts, data models, data access logic, credentials, etc. Used properly, this lessens the workload on development teams, so they can focus on functional requirements instead of solving communication riddles. It is directly tied to contract resilience (3), which becomes much easier to achieve, as many changes to contracts with downstream systems can be encapsulated within the broker without impacting the upstream applications. Lastly, with integration brokers enabling protocol-agnostic communication, there is more support for extensibility (3): it is much easier to add new services and systems to the ecosystem when they need fewer adjustments to be integrated.
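Contract resilience through the broker can be illustrated with a small sketch. Assume, hypothetically, that a downstream system renames a field between v1 and v2 of its contract: only the broker's adapter changes, and upstream callers keep sending the same payload. All names here are invented for the example.

```python
# Hypothetical scenario: the downstream system renamed "custName" to
# "customerName" and "amt" to "amount" in v2 of its contract. The change
# is absorbed inside the broker; upstream code is untouched.

def upstream_request() -> dict:
    """What the upstream application sends; it never changes."""
    return {"customer": "ACME", "amount": 100}

def to_downstream_v1(msg: dict) -> dict:
    """Adapter for the old downstream contract."""
    return {"custName": msg["customer"], "amt": msg["amount"]}

def to_downstream_v2(msg: dict) -> dict:
    """Adapter for the new downstream contract."""
    return {"customerName": msg["customer"], "amount": msg["amount"]}

# Switching the downstream contract version is a one-line change
# inside the broker, invisible to the upstream system.
ADAPTER = to_downstream_v2

payload = ADAPTER(upstream_request())
print(payload)
```

The upstream system is unaware that the downstream contract changed at all, which is precisely the abstraction the broker is scored on.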
Operational analysis
Let’s now take a look at the operational characteristics of Broker Architecture, where things are a little more varied. Starting with testability (3), we can see that it drops in proportion to simplicity: introducing a new system that facilitates communication makes testing a bit more difficult. Yet, because all orchestration, transformation, and data access logic is contained within a single integration flow per use case, the extra effort needed to test communication end-to-end is not that significant, so the impact on time to market remains fairly low.
Moving on to the characteristics describing how this architectural style operates under message load, it will be less performant than Point-to-point or EDA. We are introducing latency in the form of an integration broker, which has its own logic to execute, so the overall performance (3) of the ecosystem will be impacted, a trade-off compared to the previous architectural styles. How severe this impact is will differ between implementations, based on a number of factors, including but not limited to:
- Deployment mode - cloud vs on-premises,
- Runtime operations - self-hosted runtime vs managed services,
- Chosen technology - whether it is chosen to serve the right needs, like on-demand or event communication, or batch transfers,
- Integration flow complexity - simple point-to-point flows, simple orchestration, or complex BPM-like orchestrations (which we do not advise).
The end result might differ noticeably from the qualitative analysis, as some of these factors can be improved in various ways. This leads us to a very similar situation with scalability (4), which depends on similar factors. Luckily, since most modern integration brokers are built on a microservices architecture or as serverless functions, they have robust scalability features, including automatic horizontal scaling where needed. Combined with good load balancing capabilities, this allows integration flows to be scaled separately, providing availability that easily matches the requirements of all systems involved in the communication.
Looking further into the operational characteristics we find observability (3), which does not really differ from EDA in score, but Broker Architecture provides a completely different set of observable data and metadata. Since nearly all data is supposed to pass through the integration broker, it is a great place to gather intelligence about where data is used, how often it is requested or distributed, and how business systems perform in terms of interoperability. The only downside is that, since each integration flow is a dedicated workflow, per-system metadata is distributed among many flows and needs to be aggregated and classified before it can be used. Tying this to auditability (4): when observability is properly managed, the integration broker becomes a valuable source of information about the ecosystem, and often a source of truth on general IT operations and the consumption of data. These are very important aspects when proof is needed for root cause analysis (RCA) or audits. Combined with managed observability from the business systems, aggregated into a single logging and monitoring or analytics platform, it can give a very wide overview of all processes.
Lastly we have security (4), which scores considerably higher in the qualitative analysis. The mere fact that a mediator is used, limiting access to other systems from any business application, is a big step towards securing the ecosystem. If a system and its data are compromised, the damage is limited to that particular system, without giving any foothold into the other systems connected to the breached one. Combined with observability and real-time traffic and metadata analytics, Broker Architecture can enable anomaly detection, helping to automate and speed up the reaction to potential security breaches. If a breach is identified, the compromised system can be swiftly isolated by stopping the specific integration flows inbound to or outbound from that system, further limiting the damage.
Conclusions
Tesler's Law highlights the inevitable presence of complexity in any software system. Broker Architecture offers a pragmatic approach to managing this complexity by strategically shifting it away from individual applications and towards a dedicated integration layer. By centralizing communication logic, data transformation, and protocol handling within the broker, this approach reduces the cognitive load on application developers and simplifies the development and maintenance of individual systems. This not only enhances developer productivity but also improves business systems maintainability, reduces the risk of errors, and ultimately enables organizations to adapt more effectively to the ever-changing demands of the modern business landscape. While the introduction of a broker introduces a new layer of complexity, it is a managed complexity, allowing for better control, observability, and scalability of the overall system.