Dan Toomey discusses the anatomy of an enterprise integration architecture, taking into consideration how it changes over time. He explains the problems we are trying to solve with integration and the best practices for solving them.
Integrate 2018, June 4-6, etc.venues, London
Dan is an Enterprise Integration Consultant at Mexia. And he’s also an Azure MVP with over 15 years of experience in integrating large enterprise systems. So, let’s welcome Dan to talk about “Anatomy of an Enterprise Integration Architecture.” Over to you, Dan.
Dan: Sure. Okay. Thank you. Thank you for that. Thank you. It’s absolutely thrilling to be here, not only at this conference because it is after all the best integration conference of the year, but also, of course, to be standing on stage for the second time. So, thank you. Thank you very much, Saravana and BizTalk360. It’s quite an honor and a privilege. This presentation is a little bit different than ones that I normally give because typically, I’m talking about particular Microsoft technology or a service and doing some demos on it and it’s generally a technical presentation. But this one is a bit more in the architecture space and that’s because that’s where I’ve been working for most of the last year.
So, apparently, that’s me. The only thing I will say about this slide is that that image is by far the most interesting thing on the slide. So, rather than belabor it or anything else, I’ll tell you something interesting about myself that you won’t find by looking me up on LinkedIn. Up until 18 years ago, when I cross-trained into IT, this is what I actually did for a living. So, I was a professional musician. I’m a Juilliard graduate. And the last gig that I had was playing in the United States Air Force Band. So, if you’re wondering what the role of a trombonist is in the American military, well, let me tell you. If you have ever doubted the lethality of a trombone at close range, you simply have to ask the poor photographer who took that shot. It would be good if you could actually ask him, but, God rest his soul, he got a bit too close and, well, we couldn’t quite save Brian, but we did manage to save his camera and his photos. So, you might say that that was the best shot of his lifetime.
So, I’d like to give a couple of acknowledgments here. Lee Simpson is a Practice Lead at Mexia, where I work. He is probably one of the most forward thinkers in our organization. If there’s anything new that comes out, be it a technology or a way of thinking about architecture or solution delivery, Lee is always one of the first ones to try it and experiment with it and determine if it’s something useful. So, he kind of inspired me to do this talk and gave me a bit of direction on it. Richard, of course, you all know about Richard’s phenomenal contributions to the integration community, and you heard him speak a couple of hours ago. He was kind enough to preview my slides a couple of weeks ago and gave me some direction. So, thank you, Richard. And Mexia: the great thing about working at a company with a bunch of people who are all smarter than you is that you can learn from their collective wisdom and experience, and quite a bit of the lessons that have been learned and their experiences made their way into this presentation. So, thank them for that.
Now, the topic “The Anatomy of an Integration Architecture” is exceptionally broad. And to be honest with you, when I submitted this talk about four months ago, I hadn’t quite yet determined how I was going to narrow the focus down to something small enough to fit into 40 minutes. I since have, and what I’ll tell you is that these two images give you a hint about what the theme of this presentation is going to be. Some of you may get it straight away; the rest of you, I’ll leave to ponder it for just a minute while I introduce the topic. But in the next couple of slides, you’ll know for sure what it is.
It’s not as common today to have as many organizations with that one monolithic application, you know, the one app that does everything for their business. There might be a few out there, but typically, especially as integration specialists, we see something more like this, where there are multiple applications that make your business run. And this allows the company to get best of breed and get applications that do exactly what they need to do. And we like that as integrators because it gives us work to do, because we’ve got to figure out how to make these systems talk to each other so that the business actually runs. Some businesses go to the other extreme where they don’t have just a few, but maybe dozens or even hundreds of applications. And, of course, the integration gets very complex.
This graph kind of shows the relationship between the number of applications and the complexity of integration. There’s also a cost dimension to that as well. So, one could argue that with a monolithic application, any change that you make, no matter how small, becomes expensive because you’ve got to actually test and package and redeploy that entire application all at once. So, the tradeoff of having the flexibility of multiple applications, where you can change only one thing and just that one thing, is of course the cost of the integration to manage that complexity. And somewhere there’ll be a sweet spot where you wanna live, and that can vary from one organization to another.
There is one more dimension as well. And that is time. So, the thing about any architecture diagram is that it’s always a snapshot. It’s a point in time, whereas we know that architectures are living and breathing things, and they change over the course of time. Applications become upgraded, they’re expanded with new features, some applications are dropped, others are put in. And the rate of change between those applications varies. So, some applications change very, very slowly, you know, just incremental improvements. Others may change much more dramatically as your business processes change. And then some change very, very quickly, normally in your experimental stage.
So, you can see that we can represent this as layers, layers of the rates of change. And now you understand what the meaning is of those two pictures. And that’s the theme of my presentation, what does this mean from an integration perspective, right? It’s all about enterprise agility and how can we use Microsoft tech to accommodate the different layers and the different speeds at which those layers move so we can reduce the friction there. So, let’s talk first about those layers and what they are. The concept of layers is not new. Gartner introduced this about eight years ago. How many of you have heard about the pace-layered application strategy? Okay. Good. A few of you. Okay. So, I’ll just talk briefly about what this is.
Gartner introduced this to address a growing concern that there was a conflict between business leaders and IT leaders: the business leaders want their business to be flexible. They want to move and change as they need, and the systems, of course, to support that. So, to address, for example, market changes or competition, or just to use technology to increase efficiency. But the IT leaders tend to like things to stay the same. Once they get an application deployed and it’s working, they don’t want to touch it because they don’t want to risk being woken up at 2:00 a.m. with problems. But obviously, as Kent mentioned yesterday in his presentation, transformation does not occur while waiting in line. So, business has to be able to move. So, Gartner created this model to give some guidance and address how we can manage that.
So, if we analyze that a bit: the bottom layer is the systems of record. Every business has at least one system of record, right? This is what represents your core capabilities and your core processes, and also your core data, the source of truth. So, it might be a CRM or an ERP. And the thing about these systems is that the capabilities they represent are common across all businesses in that particular industry. For example, a bank is a bank. A bank has to manage things like transactions and accounts and customers. And that’s not going to change from one bank to another. And that’s why a lot of the systems of record are really often vendor packages. They’re created by somebody and then sold to a variety of organizations.
The next layer up is the systems of differentiation or uniqueness. And this represents the systems that support those processes that are somewhat unique to your particular business. So, for example, most banks offer loans to their customers, but the way that my bank processes loans might be different to the way your bank processes loans. And those business processes maybe aren’t supported out of the box by what comes out of their system of record. So, this is where you would build the systems that accommodate those unique things.
The top layer is the systems of innovation. And this is your sort of experimental stage. So, this is where you try to address new technologies and new ideas that haven’t yet been proven. So, as you’re building these things, obviously, they’re going to change fairly radically, and they have to be addressed in a different way. They have to have that freedom to be able to move and experiment. And in terms of rate of change, the systems of record, generally, are not going to change very rapidly, right? You might have incremental improvements to those things, but once you’ve invested in one of those systems, it’s going to stay there for a long time, because your core processes don’t really change very much. What a bank did 20 years ago is pretty much what they’re doing today. Obviously, the way that we access those capabilities has changed, but the actual capabilities themselves are still there.
The business processes, though, will change a little bit more swiftly. So, over time, the bank will find more efficient ways to do things and better ways to engage with their customers. But the innovation systems, that’s very experimental. That could change quite quickly as you try things out: that doesn’t work, try something else, and move along that way. So, just looking at the aspects across those layers. From the business process perspective, at the system of record level, they’re going to be very, very structured and repeatable processes. But as you move up that stack, they’re going to become more configurable and autonomous, and then gradually it gets to be ad hoc and very dynamic. And the same could really be said for the data as well. The data is going to be very structured with your core systems because it usually has to be very high-quality and be prepared for auditing, and is very much internal data. But as you move up the stack, you’re going to start finding more reliance on external data, and the data may not be quite as well structured.
Governance is going to be extremely tight on your systems of record, naturally, because you have to protect that. You have to protect your master data, you have to protect your processes, and understand that any change that you make there is going to have a fairly large impact on everything else that’s relying on it. But as you move up the stack, especially when you get to the innovation layer, you have to have the freedom to try things. The same thing with the business engagement: it will be very formal at the bottom, but maybe a bit more relaxed as you get to the top layer. And as far as timelines, these are Gartner’s numbers. You can see systems of record are there for a number of years, right? And you plan for that. The business processes are also probably measured in years as well, just not necessarily as many. But the innovation layer could change on a monthly basis.
So, now that we understand what those layers are, what does this mean from an integration perspective and how can we build our APIs and our integration to help adapt to the changes in these layers? So, let’s start at the bottom, the systems of record. Typically, these are packaged applications, right? So, they may be made up of services, but they’re assembled together and deployed together and hopefully, they have decent APIs. So, sometimes those APIs aren’t very convenient to use. So, typically, we build a layer of APIs over it that can provide that abstraction for it. So, this could address things like making the protocols more friendly, like establishing REST interfaces, for example, and also adapting the data model that the system of record has to maybe conform more to the logical data model that you have for your business, to make those services a bit more consumable, particularly, by the systems and the layers above.
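To make the data-model adaptation side of that abstraction layer concrete, here is a minimal Python sketch. The legacy field names and the canonical customer shape are purely hypothetical; in practice this translation would live in something like an API Management policy, a BizTalk map, or a small facade service:

```python
# Sketch: a facade function that adapts a hypothetical system-of-record
# payload to a friendlier canonical REST model. Field names are illustrative.

def to_canonical_customer(legacy: dict) -> dict:
    """Map a legacy record (e.g. from an ERP's verbose API) onto the
    business's logical data model, hiding internal codes and structure."""
    return {
        "customerId": legacy["CUST_NO"],
        "name": f'{legacy["FIRST_NM"].strip()} {legacy["LAST_NM"].strip()}',
        "status": "active" if legacy["STAT_CD"] == "A" else "inactive",
    }

legacy_record = {"CUST_NO": "00042", "FIRST_NM": "Ada ", "LAST_NM": "Lovelace", "STAT_CD": "A"}
print(to_canonical_customer(legacy_record))
# → {'customerId': '00042', 'name': 'Ada Lovelace', 'status': 'active'}
```

The layers above only ever see the canonical shape, so the system of record can change its internal codes without rippling upward.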
So, for example, in the systems of differentiation, those will typically compose their services from those APIs below. And they will consume those APIs and also APIs maybe from external systems as well. And when you get up to the top layer, the system of innovation, that’s going to consume APIs from both levels as well as external. Now, the glue, the connective tissue that makes all of this happen, of course, is a message bus because that provides you the ability to have that asynchronous and decoupled communication, not only between the layers but between the services themselves in the layers. So, just looking at a couple of aspects, again, they’re very, very similar as they were across the application layers. But the rate of change is going to be quite slow for the APIs that represent your system of record because obviously, there’s a lot of dependency on that as well. And those APIs are designed to be reusable because they are the building blocks for the rest of the applications and integrations that you build above it.
Change control is going to be very, very tight for the lower ones, but it could be more relaxed for the ones that you’re developing quickly in the innovation layer. And just from a testing perspective, it’s worthwhile investing a lot of time to have automated regression tests and a really solid CI/CD pipeline for the APIs that represent your systems of record and for your business processes. But when you’re developing APIs for the innovation layer, you may not want to spend that time yet. Not until you’ve actually proven that whatever it is you’re building there is actually going to work. So, what I wanna do now is look at these various layers, look at the toolkit that we have of all the Microsoft integration services, and see how you might map these to the different layers.
Now, this is all very subjective. I’m sure…you know, you’re welcome to disagree with me, and I’m certainly not going to be able to cover every service, but it’s just an idea to start us thinking. So, with your systems of record, they may be Microsoft systems like Dynamics CRM, or even SQL Server could be a system of record, or it could be some other product like JD Edwards or SAP. But you’re going to need something to build that abstraction layer of APIs over the top of it. So, one product that would be excellent for that, of course, is BizTalk Server. Typically, these systems of record often live on prem. So, it makes sense that with BizTalk Server, if you deploy that on prem, it comes out of the box with a number of connectors. And if you’re lucky enough to have an out of the box connector for your system of record, then you’re laughing because you don’t have to worry about how you’re going to code that connectivity.
But the other thing BizTalk can do is provide you the ability to do message translation and protocol translation. The other most obvious choice would be API Management too, because this is exactly what that’s intended to do. The only difference here is that API Management, for right now anyway, needs to be deployed in the Cloud. So, if you’re going to use it, you’re going to have to have VNET Integration. And as Wagner mentioned, you’ll need the Premium version of API Management for that. So it is a bit of an investment, but it is very, very worthwhile because not only can API Management do that protocol translation and even message translation for you, but it comes out of the box with all of those policies that enable you to provide security and access control, like rate limiting and throttling, caching, analytics, just a whole lot of features that really make it worthwhile. And when you deploy API Management, you don’t have to expose it to the world. You can if you want to, but you can expose it just internally to be used by your own company. So, it’s a very secure solution.
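As a rough analogy for what a rate-limiting policy gives you out of the box, here is a minimal token-bucket sketch in Python. API Management expresses this declaratively as policy configuration rather than code, so this is only an illustration of the behavior, and the numbers are arbitrary:

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: `rate` tokens per second, burst up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # a gateway would answer HTTP 429 here

bucket = TokenBucket(rate=1.0, capacity=2)
print([bucket.allow() for _ in range(4)])  # → [True, True, False, False]
```

The burst of four immediate calls shows the behavior: the first two fit the burst capacity, the rest are throttled until tokens refill.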
Another option that you probably haven’t heard talked about much at this conference is Service Fabric. So, if your organization is building microservices architectures, then Service Fabric is an excellent hosting platform for that. It’s very, very robust and designed to host microservices. Actually, it can host containers, it can host pretty much anything that you want to. It has a number of great features for managing that from a deployment perspective. I call it [inaudible 00:17:18] for DevOps. And you can host Service Fabric on prem or in the Cloud, or both, actually. How many people here know that a Service Fabric cluster can actually span on prem and the Cloud? And in fact, multiple cloud providers. How many of you knew that? There’s an excellent blog post that we did on Mexia last year. It’s an article…I’d probably have to search for it, but it’s about how to stay up when your cloud provider goes down. Because we actually did an experiment: we had Service Fabric on prem and in Azure and in AWS, I believe, all in the same cluster.
So, if any one of those environments went down, as long as the other two were up, your business was still running. The only thing about Service Fabric I should mention is that, from an application perspective, you get nothing out of the box. You have to write all of your application code. So, that’s something to consider. And if you’re going to be writing all your application code, you could argue, “Well, I could just also hand roll my own web services or web APIs and host them in IIS, on prem.” Or if you want to do a hybrid approach, you can use hybrid connections or VNET Integration to host them in App Service in Azure if you want to. So, that is always a possibility.
Now, I’ve provided this table with a bunch of notes and comparisons and suggestions about when you might use which of these technologies. I’m not going to go through it in detail right now, but these slides will be published. So, you’ll be able to reference this later. But just from a high level, the things to consider are: are you able to host in the Cloud, or are you confined to hosting on prem? Do you wanna write your own custom code, or do you want to use out of the box adapters? And also, what fits in best with your existing architecture? Are you building microservices, or are you thinking about investing in an all-in-one integration platform like BizTalk Server, which can actually serve multiple layers of your organization?
So, looking at the next level, the systems of differentiation. Again, you have to consider BizTalk Server here. And BizTalk can be hosted both on prem and as IaaS in the cloud too. And we know at least one client that does that because they want that connectivity. Their organization runs a distributed network of health systems, and BizTalk is still the best answer for doing HL7. So, in that case, the IaaS approach actually works very well for them. So, BizTalk, not only can it support those long-running automated processes for you, but it also gives you a business rules engine out of the box, where you can actually abstract some of that more volatile business logic that might have to change, so you don’t have to do a full deployment every time you change it.
It also gives you BAM, which is that end-to-end tracking of your business process. And I always feel that BAM is one of the most underutilized features in BizTalk. Almost every project that I’ve been on in BizTalk, it’s kind of treated as an afterthought, and it’s typically used more for instrumentation than anything else, but that’s not really what it’s intended for. Anyway, there could be a whole talk about BAM. If you’re connecting out to external systems, particularly SaaS services, then you really need to look at Logic Apps, right? Because not only can Logic Apps run those durable workflows like BizTalk can, but you get out of the box the 200 plus connectors that enable you to easily establish that plumbing out to those services for you. So, Logic Apps is extremely powerful.
And Azure Functions, of course, has always given you the ability to run arbitrary code, usually stateless code. So, you could call that to do some processing from a Logic App or from anything, really. But now, more recently, you have Durable Azure Functions, which can actually run those stateful workflows for you as well. And there’s an excellent blog post from Mexia, a very recent one from my colleague, Paco de la Cruz, who’s just become an MVP. It does a comparison between Durable Azure Functions and Logic Apps and when you might use one over the other. So, I’d highly recommend having a look at that. Again, Service Fabric: you can build anything on Service Fabric if you want to. So, if you’re doing microservices, that makes sense; it hosts in the Cloud or on prem. And you may well want to engage with customers in this layer.
So, web apps and mobile apps aren’t specifically integration technologies in themselves, but you can write integration logic in there. And with the use of hybrid connections and VNET Integration, they can connect to your backend systems as well. So, again, I’ve provided a table of some things that you can probably look at in more detail later. But it’s pretty much the same general kinds of things: whether you’re hosting in the Cloud or on prem, whether you wanna write your own code, and also where you’re comfortable developing that code. For example, if you really want to live inside Visual Studio, that’s an argument for Azure Functions. But if you’re comfortable also using browser-based development, like for Logic Apps, then those kinds of decisions can affect what you choose.
The systems of innovation: really, almost anything can go in there, because any technology you use could be innovative in the way that you use it. But some things that I think really stand out there: Microsoft Flow, for sure, is an excellent candidate here because you can develop things very quickly, and it doesn’t take a Dev team to do it. A business user who’s reasonably tech-savvy can actually find a way to automate some of their own menial tasks. And the good thing is that if that integration and that flow winds up being very useful, you can then graduate it to a Logic App and establish it as part of your enterprise-level integration in the systems of differentiation.
Power Apps is great for developing apps that you can use in-house on devices. And if you’re looking for new ways to engage with your customers, then cognitive services and bot services allow you ways to interact with them in a more human way. For example, if you’re applying for a loan, instead of giving them a great big static webpage with a million different fields that they have to fill out, you might use a bot service to create an interview-based kind of interaction with them where you ask them questions. And the great thing about that is that as they answer the questions, you can direct their flow and then just start asking only the questions that are relevant for what they’re doing rather than collecting a whole bunch of information, half of which might not be applicable in that case.
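That branching, interview-style interaction can be sketched as a function that picks the next relevant question from the answers gathered so far. A real bot would use the Bot Framework dialog system; the questions and field names below are made up purely for illustration:

```python
# Sketch: an interview-style flow where the next question depends on earlier
# answers, so irrelevant questions are never asked. Questions are illustrative.

def next_question(answers: dict):
    """Return the next relevant question, or None when the interview is complete."""
    if "amount" not in answers:
        return "How much would you like to borrow?"
    if "purpose" not in answers:
        return "What is the loan for?"
    if answers["purpose"] == "car" and "car_age" not in answers:
        return "How old is the car?"  # only ever asked for car loans
    return None

print(next_question({}))                                    # → How much would you like to borrow?
print(next_question({"amount": 20000, "purpose": "home"}))  # → None
```

A home buyer never sees the car question, which is exactly the point: you collect only what’s relevant instead of a giant static form.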
Also, from an insights point of view, there’s a number of services that we can use as well. So, for data analytics, if you want to do some kind of predictive analysis for your company, you can use machine learning. If you can get somebody who understands the data science behind it to build those models, it’s a great tool for doing that. And, of course, Power BI is a way to expose that information in a very consumable fashion, and you can build fantastic dashboards very, very quickly. So, again, I’ve given this table. In this particular case, it isn’t really so much comparing apples and [inaudible 00:24:46] because each one of these products does something completely different. So, it’s really more about what it is you wanna do than making big decisions about one over the other.
And the last area is, of course, the message bus. And I don’t feel, after the presentations that we had from Dan and Clemens and also Steve Young, that I need to concentrate a lot on this, particularly in telling you the difference between eventing and messaging and the tools that you’d use in those cases. But Microsoft gives us a fantastic capability for building that ability to do messaging, particularly asynchronous messaging, which gives you the most extensibility and flexibility. I’ve also included the relays in here because they establish that hybrid connectivity that you want if you’re spanning between the Cloud and on prem. And, of course, I have to include BizTalk Server, because what is BizTalk really, at the heart of it, but a very sophisticated and robust messaging engine?
So, if you’re investing in BizTalk Server, then you can address a lot of these concerns through multiple layers. So, even though it’s expensive, it gives you an awful lot of value in that respect. So, there’s the full layout of the services and how I’ve mapped them. And as I said, it’s very subjective. It doesn’t include everything, and I’m very happy to debate with any of you about the choices I’ve made there. So, you can look at this in a number of different ways.
I’ve included that table as well for the messaging technologies. So, for the last part of my presentation, I just wanna talk through a couple of best practices and some things to consider when we’re building integration, about how we can do this better. So, the first one is obvious, and probably we all do this anyway. When you’re tasked with building an integration, think about the applications or the systems that are involved, and think about how it’s going to be used and what implications that has. For example, if it’s going to be a workflow that’s mission-critical for your organization, chances are you’re not going to build it in Microsoft Flow. You’re probably going to build it in Logic Apps or BizTalk or something that’s obviously more enterprise grade.
You need to think about the security that’s required for it and the data that’s being used by it, and how you’re going to handle things like encryption and protection and access. And also, where does it sit in these layers, and how quickly is it going to change? Because are you going to invest the time in building lots and lots of automated tests and a very, very robust CI/CD pipeline? Or if you’re just experimenting with something as a proof of concept, maybe you’re just going to get down to building it.
This image from Gartner really says it all about the systems of record. It is a foundational layer. So, it’s important that that’s solid, right? So, the APIs that you build here, particularly the ones that you use in the abstraction layer have to be very, very well defined and they need to be stable because that’s the building blocks for most of the other integrations that you’re going to do. I also recommend that you implement your security and your data validation in those APIs, at that level, because you’re really trying to protect those systems that are critical to your organization. Don’t assume that the upstream services are going to do that for you because it could be consumed by a proof of concept and something that’s innovative and they’re not going to take the time necessarily to do all that validation with security.
If you protect your APIs at the system of record level, then you can mitigate the risk associated with those things above that maybe don’t have the same level of governance as the lower layers. Also, this is a tip from Gartner: limit the customization within your system of record. I can give you a case that we have right now of a client who’s using a core banking system that is at least 10 years old, and they’re at least that many versions behind because they decided to heavily customize their system.
Now, the vendor that put it in was happy to do all that customization for them, but it made them very dependent on that vendor to manage those changes. And over the years, as this progressed, they’ve been unable to actually keep up with the versions because they were worried that if they upgraded, they would lose all of that customization. Now, they’re at a point where they wanna modernize and actually create an API layer for the bank, and they can’t do it without an upgrade. So, they have this very, very expensive proposition now of doing all the analysis to work out: what was the customization we did in the system? And hope that it was documented somewhere. And then, how are they going to reimplement that?
So, they should have done the customization instead in the integration layer, which would typically be the system of differentiation. And that’s what that layer is meant for. That’s what we should try to do: do your customization there. Don’t create that dependency on your systems and get locked into a specific version. So, the canonical data model is a point of contention. There are arguments for both sides. We have clients that say, “Don’t build canonical models. Just build the API for the system the way it is. Let the consumers worry about how they’re going to consume it. If they can’t, they can build their own adapter.” So, that’s one possible approach.
Basically, you’re pushing that task onto the ones that are consuming those APIs. But I find it works really well the other way too, where if you have a good, properly defined business data model, a logical data model for your business, then you can have this consistent way that you can build APIs and everybody knows how to consume them. And if they can’t consume, then they adapt to that. But at least you’re exposing everything in a consistent way.
It does require having a good logical data model though, and typically, you need an information architect for that. And unfortunately, not a lot of organizations really invest in that, which is a shame. By the way, I wanted to say this image is from this mapping tool, and I highly recommend it, api.map.com. This was created by a colleague of mine, and what it allows you to do is upload schemas, either XSDs or JSON schemas. And you can actually do your mapping with this online tool, and it actually uses machine learning to help give you some intuition on how to do the mapping; it gives you suggestions.
Now, at the moment it can’t export code. That’s a feature he’s going to try to add later, but what you can do is once you build the map, you can export it as an image like this or you can export it as an Excel spreadsheet. And it’s fantastic documentation to give your Dev team and tell them, “This is what I need you to build.” And at the same time be able to give it to your business analysts and business owners saying, “This is how we’re mapping the messages.” So, I highly recommend that tool. Of course, when we’re talking about connecting services, and this is integration 101 for us, I guess, but try to be as loosely coupled as you can. Sometimes it’s impossible. Sometimes you have to use synchronous request-response, but when you can, Pub/Sub is always the best kind of model because it is the least coupled with your systems and it is the most extensible.
And that can really reduce the friction between those layers that are moving at different rates, between those applications. And I’d also caution against vertical dependencies, because typically, if you are making a change that requires some other system or its APIs to change, often you are not the owner of that other one. So, then you get into that argument about who’s responsible for making the change. In the rare case when you do actually own that entire vertical slice, then it’s not as much of an issue.
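The decoupling that Pub/Sub buys you can be sketched with a toy in-memory bus. In practice this would be something like Service Bus topics and subscriptions rather than hand-rolled code, and the topic and handler names here are made up; the point is that a second subscriber is added without touching the publisher at all:

```python
from collections import defaultdict

class MessageBus:
    """Toy in-memory pub/sub bus: publishers and subscribers share only a topic name."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: dict):
        # The publisher has no knowledge of who (if anyone) is listening.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MessageBus()
received = []
bus.subscribe("order.created", lambda m: received.append(("billing", m["orderId"])))
# A new consumer added later; the publishing code never changes.
bus.subscribe("order.created", lambda m: received.append(("analytics", m["orderId"])))
bus.publish("order.created", {"orderId": 1})
print(received)  # → [('billing', 1), ('analytics', 1)]
```

That extensibility, adding consumers in a faster-moving layer without modifying a slower-moving publisher, is exactly the friction reduction between pace layers.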
And the last thing I’m going to say is: basically, try to promote a culture of innovation and experimentation. Now, we as integrators often don’t sit in a position in the company where we feel that we can influence that very much; usually it’s a culture that comes down from the top. But what we can do as integrators: consider what you’ve seen over the last couple of days here, all of the demos that you’ve seen of these Microsoft products and how they enable you to build some fantastic capabilities in a very short amount of time. If you take that knowledge and that ability, if you build demos and show the decision makers in your organization what Microsoft allows you to do, then you can basically teach them how to integrate at the speed of business, which was a clear message from Jim Harrer in last year’s keynote.
The other thing that Jim said is that you…sorry, you, as in all of you, have the ability to improve your business, and you don’t need permission to do that. So, there’s a bit of a call to action out to you. What can you take away from this conference and what you’ve learned, to go back and show your business what Microsoft can do and the possibilities that it can enable? So, I think I’m…yeah, ahead of schedule. So, that’s about all I have. I’ll be around the rest of the day if you wanna discuss anything. And Mexia, we don’t have international clients, but we certainly welcome conversations from anywhere around the world. So, feel free to get in touch with us. If you’re ever going to think about visiting Australia, please look us up. A number of people here, MVPs, can talk about the visits they’ve had and the great experiences. I promise if you come to Australia, there will be no trombones. And do check out our blog too, because there’s not only a lot of great blog posts there, but there’s also articles and white papers and a wealth of information. So, thank you very much.