Integrate 2018

June 4-6, London

Summary

Kevin Lam explains the architecture behind Logic Apps, and gives a pre-private preview of Integration Service Environments (ISE) and their architecture.

Logic Apps Deep Dive

Integrate 2018, June 4-6, etc.venues, London

Length: 34:25

Video Transcript

Hello everyone. So I am Omar, and I am part of the Unity [SP] team at Kovai Limited. I am currently working on a project called Atomic Scope. So yesterday we saw an introduction session on Logic Apps, and today we’re going to have a deep dive session on the same topic. To present this, I would like to invite Mr. Kevin Lam, who is a principal program manager at Microsoft, and also Matt Farmer, who is a senior program manager at Microsoft. Thank you.

Kevin: So this is the Logic Apps Deep Dive. We’ll go into some of the bowels of how Logic Apps works in the background so you can better understand what happens when your workflows run in the Logic Apps environment. So first we have our Logic Apps Designer. Our designer is written in TypeScript/React and is hosted in an iframe in the portal, and it’s the same designer that’s actually running in Visual Studio. So it’s an iframe in Visual Studio as well, hosting the same designer.

Our designer uses OpenAPI (Swagger) to render the inputs and outputs. So we use that Swagger to understand how to generate the cards as well as the properties for each of those API operations that you have, and then render the tokens for the outputs that you get from each of the actions that get executed. The designer then interprets the design objects that you put onto the canvas and generates JSON. Right? So when you go to code view, for those of you who are familiar with Logic Apps, you see the JSON DSL. In the background, as you change that JSON and switch back, as long as you’ve typed in correct JSON, the designer renders a representation of that JSON. Okay, pretty straightforward.

Something to think about for your logic app: Logic Apps is workflow as a service in Azure, but it may not behave the way you think of regular workflow. Typically people think of workflows as, you know, first do step A, then step B, then step C, and you have conditions and loops, etc., and it’s a forward-chaining type of process. Logic Apps is not quite that way; we just make it seem that way. Logic Apps is a job scheduler with a JSON-based DSL describing a dependency graph of actions. Right, so this is an inverted dependency graph. If you look in the background, what we’ve done in the designer is make it feel like you’re doing step A, step B, then step C. What we’re really doing is an inverse dependency on the previous step. So if you look at the code view, you’ll see that there’s a runAfter property, and it says run after step A, right? Step B runs after step A. So then when you create your logic app, you have the sequence of actions.
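
To make that concrete, here is a minimal sketch of what that runAfter dependency looks like in code view. The action names and URIs are hypothetical; the point is only the runAfter shape, where step B declares a dependency on step A:

"actions": {
  "Step_A": {
    "type": "Http",
    "inputs": { "method": "GET", "uri": "https://example.com/a" },
    "runAfter": {}
  },
  "Step_B": {
    "type": "Http",
    "inputs": { "method": "GET", "uri": "https://example.com/b" },
    "runAfter": {
      "Step_A": [ "Succeeded" ]
    }
  }
}

An action with an empty runAfter depends only on the trigger, which is why actions without dependencies can all be scheduled in parallel.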

But in reality, the language allows you to have no dependencies and then everything runs in parallel. Right, so every action is essentially a task that runs in the background. That allows us to be a very highly parallelizable engine so that your workflows can actually run in parallel. So we’ll see some of that in a ForEach statement as well as in your action executions. So keep that in mind as you write your logic apps, it’s a dependency graph, not a step one and step two process.

So what does that look like? So if I have this logic app that has a Service Bus trigger, and in that Service Bus message comes a collection (there’s a message with an array in it), I will ForEach across that collection, then for each item in that collection I will call SQL and do an insert, and then, at the end, complete the message in Service Bus to say, “I’ve consumed the message, it’s safe to delete it.” So what does that look like in the background? You save it, and that workflow definition gets saved into our backend. That backend will then interpret the logic app. And the first thing it’s going to say is, “Ah, here’s the trigger task. For this logic app that’s been deployed, I’m going to go ahead and monitor Service Bus for new messages.” So that’s the first task it does. It’s asking Service Bus, “Hey, is there a new message?” Just for you guys to know, our Service Bus trigger uses long polling, so you don’t need to set your polling interval any shorter than 30 seconds, because we long-poll for 30 seconds.

So it sits there and waits, and then once a message comes into Service Bus, so on a new message, it will go ahead and tell our Workflow Orchestrator, “Hey, for this logic app definition, there’s a new instance that should be created. Please go ahead and start to instantiate the jobs that are related to this instance.” So the Workflow Orchestrator looks at the definition and says, “Okay, the first action is a ForEach.” The ForEach then says, “Okay, in this ForEach I have a collection.” And our ForEach runs in parallel; we talked about this being a highly parallelized engine. A ForEach, if you’re used to it in your coding, is typically an iterator that runs sequentially. In Logic Apps it actually runs in batches of 20, and you can configure that in the portal; we’ll talk about that in the next talk. So it’ll go ahead and take the collection in batches of 20 and call the SQL connector to do an insert, and it will do that in parallel until it’s gone through the whole collection. Then it will wait, and after it’s done, it tells the Workflow Orchestrator, “Okay, my ForEach is done, go ahead and do whatever the next step is.” It looks into its definition and says, “Now, Complete Message.” So then it will send the message to Service Bus to say, “Hey, please complete this message.” And then a workflow complete status gets put into the engine. So that’s how it works in the background.
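
As a rough sketch of what that scenario looks like in code view (the connector paths, table and queue names, and token expressions below are illustrative assumptions rather than the exact connector contract), note the ForEach concurrency of 20 set via runtimeConfiguration and the complete-message action running only after the whole loop:

"actions": {
  "For_each_item": {
    "type": "Foreach",
    "foreach": "@triggerBody()?['items']",
    "runtimeConfiguration": {
      "concurrency": { "repetitions": 20 }
    },
    "actions": {
      "Insert_row": {
        "type": "ApiConnection",
        "inputs": {
          "host": { "connection": { "name": "@parameters('$connections')['sql']['connectionId']" } },
          "method": "post",
          "path": "/datasets/default/tables/@{encodeURIComponent('Orders')}/items",
          "body": "@item()"
        },
        "runAfter": {}
      }
    },
    "runAfter": {}
  },
  "Complete_the_message": {
    "type": "ApiConnection",
    "inputs": {
      "host": { "connection": { "name": "@parameters('$connections')['servicebus']['connectionId']" } },
      "method": "delete",
      "path": "/@{encodeURIComponent('orders-queue')}/messages/complete",
      "queries": { "lockToken": "@triggerBody()?['LockToken']" }
    },
    "runAfter": {
      "For_each_item": [ "Succeeded" ]
    }
  }
}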

So note that this is all tasks in the background. There’s no compiled code; this doesn’t get compiled into code. We interpret your JSON and then create different tasks in our backend to get executed. I won’t take credit for this cool graphic. So, things to note: we have a highly resilient task processor. There’s no active thread management, so you don’t have to think about your workflow being, you know, compiled into an assembly that’s running threads on some VM for all this parallel work. It’s not. These are distinct jobs that get distributed across a lot of nodes that are running in our backend. And since there’s no active thread management, your runs can exist in parallel across all these machines; they’re not limited to a single node in our backend. So that creates better task resiliency. Even if a node goes “Poof!”, which can happen at any time, it’s okay, because all your other tasks are still running, and we can handle transient failures because our engine will monitor a task that’s been handed out and, if it’s not complete, reschedule that task to get completed. So we will continue to try really hard to make sure that happens.

We have built-in retry policies, so if there are any transient failures as you’re talking to your endpoints, we will handle that for you. You know, 90% of your work is API based, talking to different endpoints, and things happen out there: network issues or blips, or, you know, trying to do some auth. There are a lot of moving parts in there, so there are a lot of things that need to be retried. There’s a default retry policy that we have, and we will handle that.
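
For reference, an action can also carry an explicit retry policy in its inputs. A minimal sketch, assuming an HTTP action; the URI and the specific count and interval values are just examples of the knobs that are available:

"Call_endpoint": {
  "type": "Http",
  "inputs": {
    "method": "GET",
    "uri": "https://example.com/orders",
    "retryPolicy": {
      "type": "exponential",
      "count": 4,
      "interval": "PT7S",
      "minimumInterval": "PT5S",
      "maximumInterval": "PT1H"
    }
  },
  "runAfter": {}
}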

So I talked about how, if a task doesn’t respond, the Workflow Orchestrator will go ahead and assign a new task. And we have at-least-once guaranteed execution. In the cloud world, for consistency, you want to think about eventual consistency, idempotency, and at-least-once execution; those things go together in our new cloud world. For those who were used to coding in the ’90s with DTCs on servers: don’t do that anymore. But it’s at-least-once guaranteed, so we commit that we will do it. Sometimes a response gets lost, or, you know, a machine may have died just as it was completing its job, but we guarantee that it will execute at least once.

So as we look at the component architecture in the background, we have, and I’ll keep this a bit vague, four general components. We have our Logic Apps RP, which is a Resource Provider, essentially our frontend that handles all the requests. So every time you create a logic app or make a management request, it goes to our Logic Apps RP. It reads the workflow definition and breaks it down into component tasks with dependencies, and then puts that into our storage, which the backend will then go process.

The Logic Apps Runtime is the distributed compute that coordinates the tasks that have been broken down from your logic app. Then we have a Connection Manager that manages the connection configuration, token refresh, and credentials that you have in your API connections. And finally the Connector Runtime, which hosts our API abstractions over all the APIs that you use; those are sometimes codeful, sometimes codeless, and it manages that abstraction for you. Right, so that’s our… So remember this picture because I will come back to it in a moment.

Integration Service Environments. So this is really the first time that we’re talking about Integration Service Environments. I mentioned yesterday that we’re releasing a new environment that allows you to do some great stuff, and I’ll talk about that now. An Integration Service Environment essentially gives you dedicated compute and isolated storage. In our regions around the world we have stamps, and those stamps execute your workload. So essentially you’re getting an environment to run your logic apps and only your logic apps. Since we have that set of compute and storage for you, we can now take that set of nodes and integrate it into your VNET. So with the VNET connectivity, you no longer need to talk to on-premises systems through the on-premises data gateway; you can talk directly to your endpoints if you have an ExpressRoute. That will also give you private static outbound IPs. We have static IPs in Logic Apps today, but they’re for our service, so when you look at that IP, anybody who deploys a logic app will be executing from those IPs. But with an Integration Service Environment you get your own set of IPs, so you can be certain that it’s your logic app that’s actually behind that IP address.

And custom inbound domain names. So, you know, for example the request trigger: if you look at the request trigger today, it has this gnarly URI with our domain name at the front. When you create one of these logic apps in the ISE, it will still have a gnarly name, slightly different, but now you can go ahead and put your own domain name in front of it, where you couldn’t before. That creates some interesting scenarios, for things like putting Traffic Manager and other things in front of it as well.

And finally, flat cost. Today in Logic Apps you get per-action billing, and that’s great if you want to only pay for what you use, and it deals with peaks, etc. But, you know, when your CIO says, “Okay, how much is this going to cost me?” you say, “I don’t know. It depends on how much we’re going to do.” Which is sometimes great, but sometimes a little frustrating, and some people want a predictable cost. So we’ll have a flat rate: you go ahead and party in that Integration Service Environment, and you’ll get a flat cost every month for that environment.

So what does that architecture look like? We had this picture before of what one of our stamps looks like in a region. When you create an Integration Service Environment, we’ll actually take the runtime and the connector runtime and host them in your own private environment. It will be treated almost like a new region, and I’ll show you that in a minute. And then that environment, if you have an ExpressRoute to your on-premises network, can be integrated into your VNET to talk directly to your on-premises resources, and you have control over that VNET.

Pause there, people take pictures. Okay. So, the deployment model. When you create an Integration Service Environment, we’ll give you the ability to do 50 million action executions per month. It will include a Standard integration account and an enterprise connector. Enterprise connectors today are the SAP connector and the MQ connector, and there’ll be more coming. They’re not billed at enterprise connector prices yet, not until they become GA, but they will be soon. And then VNET connectivity. So all of that comes with the base unit. And if you want more processing through the month, you can buy extra processing units, and each one gives you an extra 50 million action executions on top of that. Wanna see it?

[music 00:14:01-00:14:17]

Okay, that was enough.

So, Integration Service Environments. Let’s go ahead and… look, this is real, I didn’t make it up. We have our engineers here in the background who have been working many nights to make sure that this is available for you guys. So we have the ability to create an Integration Service Environment. You don’t, I do. Not yet. Everybody is looking for it, “How do I do that?” My network… yes, there’s my network. So you can create an Integration Service Environment, and it’s going to be very simple. We’re still in pre-private preview, so, you know, hopefully the demo gods are with me. But you can see it’s very simple: you put in the name for your Integration Service Environment, your subscription, your resource group, and the location where you want that stamp to live. The one piece of data that’s not here yet, which we’re going to add soon, is the VNET information, so that you can go ahead and inject it into your VNET. Okay, but this takes a little while, because you can imagine that conjuring up a bunch of infrastructure for a stamp that’s dedicated to you will take a little while in the background. So instead of creating that live, which would take 20 minutes or so, I already created one.

So here’s an Integration Service Environment. Again, this is a little raw, but I just wanna show you that it’s actually real. What we’ll have here, imagine, is a bunch of meters and diagnostics so you can see what’s happening in there, and then it’ll show you the list of workflows and connections, etc., that are in there. But what I’m going to do is create a logic app that runs in this environment. Okay. So I’m going to go ahead and create, with my mouse, a new logic app. And this is my ISELogic. Okay, let’s pin this to the dashboard. So the interesting part is the location here: instead of putting it into Brazil South, notice up top here that, in addition to these regions, we have a set of Integration Service Environments that you can deploy to. So I’m going to deploy to my IntegrationISE, and now my logic app will be deployed into that Integration Service Environment. Go ahead and create it.
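
For readers who script their deployments, the same thing can be expressed in an ARM template. The sketch below is an assumption about the general shape rather than the exact preview contract (the apiVersion shown postdates this talk, and the names, location, and empty definition are placeholders): the workflow resource points at the ISE through an integrationServiceEnvironment property instead of relying on the region alone.

{
  "type": "Microsoft.Logic/workflows",
  "apiVersion": "2019-05-01",
  "name": "ISELogic",
  "location": "West Europe",
  "properties": {
    "integrationServiceEnvironment": {
      "id": "[resourceId('Microsoft.Logic/integrationServiceEnvironments', 'IntegrationISE')]"
    },
    "definition": {
      "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
      "contentVersion": "1.0.0.0",
      "triggers": {},
      "actions": {},
      "outputs": {}
    }
  }
}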

Okay, so now it’s cooking and it’s going to deploy to that stamp. You can do it. Good intro music. Hey, there we go, okay. [inaudible 00:17:34] Distract you a little bit. Don’t look at the stuff in the background. Okay, so this is the same experience that you had before for creating a logic app. It will feel no different from what you’ve done before. So I can go create a logic app. We saw that demo that Derrick did yesterday for creating a simple logic app. Network, don’t fail me now. Here we go. So I can create a request endpoint. And I’m going to change the method to GET; in case you guys didn’t know, you can actually change the method that you want for this. A little secret there for you. And I’m going to go ahead and call MSN Weather just like Derrick did: get the forecast for today, and then create… let’s use back home, 98052. And finally, add a response. And the weather is… temperature high… units. Whoops. Okay, so that’s it. So I go ahead and save that, and now, you know, I just created a new API that goes and gets the high temperature for today. And it’s running in my environment. Let’s see if that works.
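
Under the covers, the logic app built in this demo has a definition along the lines of the sketch below. The MSN Weather connector path, the connection name, and the output property used in the response are approximations for illustration, not the exact connector contract:

"triggers": {
  "manual": {
    "type": "Request",
    "kind": "Http",
    "inputs": { "method": "GET" }
  }
},
"actions": {
  "Get_forecast_for_today": {
    "type": "ApiConnection",
    "inputs": {
      "host": { "connection": { "name": "@parameters('$connections')['msnweather']['connectionId']" } },
      "method": "get",
      "path": "/current/forecast/@{encodeURIComponent('98052')}",
      "queries": { "units": "Imperial" }
    },
    "runAfter": {}
  },
  "Response": {
    "type": "Response",
    "kind": "Http",
    "inputs": {
      "statusCode": 200,
      "body": "@body('Get_forecast_for_today')?['daily']?['day']?['temperatureHigh']"
    },
    "runAfter": { "Get_forecast_for_today": [ "Succeeded" ] }
  }
}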

Well, before I do that, I need to get the magic URL. Get the magic URL, it’s there. Now we’ll go to Postman. Where is my Postman? There’s my Postman. I’ll go ahead and paste that in there, and we should be good. And we’ll go ahead and… 68 degrees in Seattle. That’s it. So, you know, I created the same logic app you could in the public environment, in your Integration Service Environment, and it works. Yay!

So all of that’s coming soon. We’re going to go into a private preview with that, so if you guys are interested, come talk to me after the talk and we can discuss the Integration Service Environment. And we’re looking more towards the end of the year for some of this stuff. Question?

Man 1: How much?

Kevin: How much? I’ll talk to you later about it. Where is Shawn? Is Shawn here? Okay, let’s get back to the slides. And I think that’s it. We’re going to transfer over. Let’s get back to display mode. I think this is all yours now. Do you need intro music?

Matt: No, because I think we’re going to have to pay royalties to Queen and David Bowie anyway now. Good morning everybody. This event has the most awesome community, and I’m just going to call that out, on Twitter and stuff like that, because I sat down while Kev was presenting, and already, just in 20 minutes, there are conspiracy theories going on about why I’ve disappeared. I’d just like to say, I’m fine. Parallel with world events, keeping up with the US. Anyway, I’m going to talk quickly about custom connectors. You would have got a flavor of this yesterday when John was doing some of the enterprise integration whiteboard. I just want to talk about it a bit more, because it’s a really important component of building integrations with Logic Apps, and it can really make it easier for you and your colleagues and the people within your business to build up these processes.

So you’re probably familiar by now: we have lots and lots of connectors in Logic Apps. And you should be looking there first if you’re trying to connect to SaaS services and things like that, to make sure that you’re using what we already have. And just to give you a flavor of how proud we are of the number of connectors, I think this is [inaudible 00:22:03]. This is the slide we used last year; it’s not as pretty as our current slides. But there are lots and lots of connectors on here, and we’ve got lots more now.

So please go and have a look there; we’re adding them all the time. But there are certain scenarios where you’re going to need to create your own: either you’re trying to connect to a SaaS service that we don’t have a connector for yet or, more likely, you’re trying to connect to a service that’s within your organization, an API for a system that you’re already using, and you want to be able to surface that within logic apps. That is where custom connectors come in. Any REST API, or actually SOAP API, is very easy to create them from. It’s a simple creation wizard, and I’m going to show you that now. Most of this is going to be a demo so I’m going to put some… together for you. It’s a simple designer experience, you can tailor how it looks, and we’ll show you that.

So what happens if you want to connect to your own stuff? Let’s go through and do a little demo. First things first, we’re going to start with an API we already have. An API we already have… whoops, if I go into the right thing. Now, the API we’re going to use today is sitting in an API Management service. Just to make it absolutely clear, you do not have to do that. You can create a custom connector from an API endpoint any way you want, as long as you can reach it, and we can use the on-premises data gateway or virtual networks to get there. But I am using one that I’ve created in API Management just to prove that you can. So here is our Big Conference API. The API Management user interface has sped up recently. It’s so quick, it’s fantastic. So we’re going to create… This is a conference API, so let’s imagine for a second that we are an organization that runs conferences and we wanna take that data and use it elsewhere within some of our business processes, connect to SaaS systems, things like that. Wouldn’t it be great if we could take that API and reuse it in a logic app so we don’t worry about the abstraction? Great.

So let’s go and build ourselves a new custom connector based upon it. So if I search for “logic apps custom connector”… there we go. We’re going to go and put one together. And we’re going to call it MattConference, just so I can find it. Okay, let’s do that and pin it to the dashboard so I don’t lose it. Off we go; it won’t take long to create. So what we’re going to do in a second is take the Swagger file that comes out of Azure API Management, which has already been created for us, and use that to create our custom connector. Come on, deployment… there we go. And here’s what the authoring experience looks like. It’s pretty simple, just step by step. I’m doing one completely from scratch now; you can do this too. When you get back to the office, try one with an API you’ve already got. So as you saw yesterday, you can use SOAP or you can use REST. We’re just going to use the REST definition that comes out of the APIM developer portal. I’m going to go to the desktop, because I created it earlier, if I can find it.

Man 2: It’s hard to read from the [inaudible 00:25:32]

Matt: I’m sorry?

Man 2: It’s hard to read.

Matt: Oh, yeah, yeah. Okay. Now, obviously, this is no problem to zoom in. Oh, hang on. It was running so smoothly up till then. Okay, let’s just give it a go. How’s it like that?

Man 2: [inaudible 00:25:53]

Matt: Obviously, increasing the screen size without practicing it is not likely to cause any problems whatsoever in a demo like this, right? That never happens to us. Just feel the tension with me, all right? And if it all goes wrong, then we’re coming after you. All right? So let’s go and find our desktop, and there we go, there’s our Swagger file, phew. Okay. We’ve got our Swagger file, and we’re also going to use an icon, because it’s nice, you know, if you’ve got things people are reusing, to make it fun. So we’re going to upload an icon here. We’ve got a little robot, because that’s what you think of when you think of conferences, and that’s not the first image I found on the, you know, Microsoft site for these things.

Right. Anybody have a favorite hex code for colors they wanna call out? Red, that’s not a hex code, I’m sorry sir. This is a technical conference and I… Apologies. Anybody? Anybody?

Man 3: FF0000

Matt: FF0000, thank you, Paul. Your house must be interestingly decorated with that as your favorite color scheme. Right, so, thank you for that. So we’re going to create our API based upon it. You’ll see that it’s pointing to the API Management instance, and now we just have to set up security. Because in my Swagger file I’ve already defined that this is an API protected by an API key, it’s already set this up for me. But I can go in and completely change my authentication scheme. So if you want to support, say, SaaS services, quite often they’ll be OAuth; you can go into OAuth and then choose the type of OAuth you’re actually going to use to connect to it. So you can use Azure Active Directory, or Fitbit it would appear, and various other things like that to connect, Spotify for example. But we’re going to go back and use an API key, and there we just set up our definition. And here is where you can also tailor the API. So imagine you’ve got a bunch of internal systems with APIs that vary in complexity and maybe have very complex operations that are rarely used; you don’t have to project those out through this connector. You can go in and add documentation, you can remove operations, and you can also set visibility.

So for example, if you have some operations that are meant for expert users, you can configure them like that. We’re going to just use this speakers operation to find out who’s speaking at the conference. You’ll notice we haven’t put in a description, so we’re going to put “Get the speakers” in there. And you could go and do that for all these operations to make them friendlier to use. The whole point of using a custom connector is that you’re allowing people who perhaps don’t know this API that well to use it within logic apps and not have to worry about the actual implementation details of the API. As you’ll see in a minute, they could just do an HTTP call if they wanted, but the whole point is you’re making their life easier. So, there we go. There are some simple configurations [inaudible 00:29:18] through things like the query parameters and header configuration from the Swagger file. And now if I just go and save it, we update the connector, and it’s all done. It’s ready to go. So let’s close that and now go and test it out.
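
For context, the Swagger (OpenAPI 2.0) file exported from API Management is what carries both the API-key security definition and the operation description edited here. A trimmed, hypothetical sketch; the host, base path, header name, and operation shape are assumptions for illustration:

{
  "swagger": "2.0",
  "info": { "title": "Big Conference API", "version": "1.0" },
  "host": "contoso-apim.azure-api.net",
  "basePath": "/conference",
  "securityDefinitions": {
    "apiKeyHeader": {
      "type": "apiKey",
      "name": "Ocp-Apim-Subscription-Key",
      "in": "header"
    }
  },
  "security": [ { "apiKeyHeader": [] } ],
  "paths": {
    "/speakers": {
      "get": {
        "operationId": "Speakers_Get",
        "summary": "Get the speakers",
        "description": "Get the speakers",
        "responses": { "200": { "description": "A list of speakers" } }
      }
    }
  }
}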

So here’s my logic app. We’re just going to go and test the conference connector. Right, we’re going to use a template: HTTP Request-Response. This may be obvious to a lot of you, but some people don’t realize that Logic Apps is actually one of the quickest and easiest ways to build an API that does HTTP request-response. Just use this template and stick some stuff in the middle, boom, you’ve got an API. That’s what we’re going to do here. That may be obvious to a lot of you, but it wasn’t always obvious to me. Right, so we’re going to add an action now. Now, there are three ways we could call this API sitting in API Management. We could just use an HTTP action and make the call there, but if we choose the HTTP action, we have to set up the method and the URI and various other things. So that’s one option. Or we could not do that and use another action, calling it from Azure API Management. And you’ll see when we go in there we can actually… there are lots of them there. We could go into the instance and call the conference API from there. When you do that, it’s better; it’s done some of the configuration for us, but, you know, we still have to select the operations and they’re very much raw HTTP. Best option: let’s go and find our connector with the lovely red background that Paul gave us and use it [inaudible 00:31:12]. I can search for it. I’ll search for Matt. There we go.

Oh, this… yeah, it didn’t use the red background. There you go. That isn’t red, is it? There you go. I’m not even going to try to work out what’s happening there, but there’s something wrong with the color I’ve used. But there we go. So all the operations that we selected to publish are available. Speakers_Get here is available to use. I need to put in the key, and I forgot to get that from Kev, so I haven’t got it. But if I put in the API key, I’ll be able to call it. That’s how easy it is to create a custom connector. It’s just something you can try when you’re back at your office on Thursday: give it a go, see how easy it is and how easily you can use these within logic apps.
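
Once the connection is created, the custom connector action that lands in the workflow definition is just another ApiConnection action. A minimal sketch, assuming a connection named 'mattconference' and a /speakers path exposed by the connector:

"Speakers_Get": {
  "type": "ApiConnection",
  "inputs": {
    "host": { "connection": { "name": "@parameters('$connections')['mattconference']['connectionId']" } },
    "method": "get",
    "path": "/speakers"
  },
  "runAfter": {}
}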

So, custom connectors then. Reuse the common components from your enterprise systems, make it really easy to build workflows based upon the things you use a lot, and take away that complexity from people so they don’t have to worry about it. It makes life really easy. You can surface any operations you wish; you have a level of control over what you’re exposing. You can package up that logic, and then think about the possibilities here. The custom connector at the end is kind of the endpoint: behind it you could have a web API, or a function, or another logic app calling a bunch of other systems. You can package these things up. You provide the custom connector as a way to completely abstract this logic and provide, you know, parts of your business’s API intelligence for people to use. And then one thing further: if you’re an ISV, we can take this a little further. It’s a gorgeous shot of our docs. But actually, you can take your custom connector and turn it into a real connector. So if you’re an ISV and you have a service that you want to, you know, publish so people can use it within logic apps, reach an audience that can use your software, or provide an easy way for people to do integration with your software, you can turn your custom connector into a real connector that we surface within logic apps.

So there are a few caveats on this. The first one is that it has to be a SaaS service. And the second one, most importantly, is that it has to be your SaaS service. If you’re thinking, “Well, I could make a connector for a popular social media site and accidentally siphon off all the data,” that was my idea and I’m working on it; no, you’re not allowed to do that. It has to be your SaaS service, and it has to support some common things, like the way in which it’s authenticated, and things like that. Once you have a connector of your own within logic apps, you can build template logic apps and give those to your customers and the people who are implementing your service, to make it really, really easy for them to build integrations with your SaaS service. There we go, how’s that? We got the time back. So that’s what we had for the Deep Dive on Logic Apps, and thank you very much.
