Integrate 2016

May 11-13, London

Powerful Integration and Workflow Automation

Length: 47:28
Speakers: Kevin Lam and Jeff Hollan

Video Transcript

Jeff: I'm Jeff Hollan. We're really excited to be here. This is kind of like Logic Apps Live hosted directly from BizTalk360. So, if you haven't already watched that webcast you should. It's trending. It's becoming more and more viral on YouTube every week. I think we just passed Gangnam Style for the last webcast, right? So, it's kind of a big deal.

Kevin: That’s right. We just need a couch.

Jeff: Yeah, I know, we need a couch.

Kevin: Okay, so what are we gonna talk about today? First, we'll go into the modernization of integration: what does that mean for Logic Apps, and how are we taking you forward into this modern world of integration? Then, the backend that powers connectivity to all of the SaaS services and protocols that you hear about, through our API connectors. I'll cover some of the high-level architecture of the Logic Apps service, so you have a better understanding of how the service actually works behind the scenes. Then we'll go into the definition language, cover some management, and then of course, do a demo.

So traditionally, enterprise integration has been in the hands of IT shops. They've been responsible for integrating line-of-business systems and building messaging infrastructure that the rest of the organization can use, and it has always been a very challenging and daunting task to engage in one of those projects to build integration systems. In the modern world, we have a lot of departments and individuals making decisions about where their data is going to live and the services that they use. They want to be more agile to bring their business forward and ensure that they're not blocked by IT and year-long projects to enable certain scenarios.

So now, we have this task of being able to hook up both our traditional IT systems with the modern SaaS platforms, and even just amongst the SaaS platforms that are out there, how to integrate that data. Traditionally, it was scary to go ahead and do an integration project: you needed a specialist, you had to do long-term project planning, and there was a big monetary investment to make that happen. With this modernization of integration, we want to democratize it so that citizen developers don't feel afraid to go ahead and do that, and can integrate their SaaS solutions, either SaaS to SaaS or SaaS to on-premises systems. And with Logic Apps, we're hoping to bring that forward.

With that democratization of integration and this modernization of integration, you want to be able to trust that the platform can run as a service where you don't have to manage hardware or think about deployment, scalability, or patching OSes. So, you want to be able to trust an iPaaS, an integration platform as a service, so that you can go ahead and deploy and trust that the system is actually going to do what it needs to do, will scale out to meet your needs, and that you'll pay for what you use. And, especially as we're trying to move into the Gartner Magic Quadrant, you have trust in the enterprise capabilities that the platform brings along with it.

And to be able to use this platform, you have to have a rich ecosystem of connectors to connect to the things that you care about. There are some obvious connections that we have to SaaS services; the most popular ones we have out of the box, to things like Salesforce and Office 365, Box, Dropbox. But you have more traditional protocols that you need to talk to, HTTP, FTP and other protocols, and of course your hybrid on-premises systems, and we have to make sure that we've created connectors for all of those.

But even though we have dozens today and we're building hundreds, that's still not enough. There are systems out there that people need that we just can't get to with the amount of workforce that we have. So, we're going to open up that marketplace to third parties so they can publish their own connectors, which then become available to you. So either the services that actually host the SaaS can publish a connector to make it really simple to integrate with them, or consultants and other people out there who want to take that initiative, or who build a better connector for themselves, can get it out there. That allows them not only to make those services accessible but also gives them an opportunity to monetize those new capabilities.

So with Logic Apps, we want to connect and automate the common tasks. You know, Jim was really careful about not saying simple, and I'd make a differentiation there. It's not easy, but we do want to make it simple. You know, traditionally with BizTalk, it does take some black magic to understand the intricacies of the system, and once you have that, then you are a specialist and people will entrust you to go build those systems for them because you understand those intricacies. But in our new system, we don't want you to have to have the special sauce or have anything to demystify. It should all be out there and easy for you to understand.

So we want to build simple and intuitive tools so that it’s very approachable, and right from clicking the “Create” on Logic Apps, you can get going and build something very quickly. Even though I’m saying it’s simple that doesn’t mean that you’re not building crucial and reliable tasks in that system. Simplicity does not mean not reliable or not trusted.

Today, everybody's connecting to these applications via web and mobile apps. We wanna make sure that those web and mobile apps have a very easy path to connect to those systems, as well as to wherever you host your web services that have functionality and connectivity to other systems and processes.

You want to be able to connect your existing systems and SaaS. We saw some demos earlier today where you can take your line of business system that’s connected either directly or through BizTalk and then connect up to any of the dozens to hundreds of SaaS services that are out there, and take advantage of the cloud flexibility and connectivity that we provided for you.

And finally, we have BizTalk APIs for expert integration scenarios. We understand that after over 16 years of building BizTalk, there's a lot of IP in that capability: how to do messaging patterns, how to make sure that we can do rich scenarios like verticals for B2B/EDI, how to handle XML. Those things are important. There are a lot of systems out there that aren't going away soon that still need to do that type of integration work, and we're bringing those capabilities and functionality into Logic Apps.

So first, I want to talk about the API connectors. All right, this is the part that enables a lot of the scenarios from the engine. So we have a set of managed connectors. You guys, how many of you used the first release of Logic Apps? Many of you, right? And so, I'm sorry. And for you guys, you know what I mean. We were very quickly able to get you connected to these systems, but we put the task on you to go ahead and deploy these connectors, manage those connectors, and figure out how to size them. And so, we didn't really accomplish the task of being a good platform as a service, because we'd pushed the responsibility to you. And so now, we've taken that back. We realized that that wasn't the right way to go. And so now, we have managed connectors. We go ahead and host those connectors on our platform, so you don't have to worry about those things anymore, you just connect to them.

So we have Cloud APIs and platform functionality built into the service itself. We have dozens of these built-in connectors, and as I said, we’ll have hundreds more coming just from our teams that are building those. They are hosted and managed within the platform. So what that means is that as you connect to those connectors, you don’t have to worry about, you know, is it going to meet my scalability needs or performance needs. We will make sure that that happens. You don’t have to worry about if there’s an issue with it. We are servicing those things. We have 24/7 servicing on that, and people wake up in the middle of the night to make sure that it’s running for you.

As I said, it scales to meet your needs, and it has a first-class designer experience. So for all the connectors that we have, we abstract those APIs so you don't have to understand their intricacies, and we give you a nice designer experience so you can intuitively go ahead and code against them. And that allows you to do rapid development. So things that took you months to build, and requests to your department to figure out how to do, you can now do very quickly: pick up those connectors and start connecting with them. We met with many customers this weekend, and even for what seems like a very simple Logic App with three or four steps in it, the amount of power it gave them to connect these various systems, without having to understand all those intricacies and the engine behind it, was really empowering for them.

So, now that we have these connectors, how do you actually go ahead and configure and connect to them? Instead of you having to manage the connector, have it deployed into some web app, and manage tokens and very strange things, we've taken all of that away from you. So now all you have to do is create an API connection, and that connection is merely some config. Depending on the type of connector that you have, it's either an OAuth token that we'll manage for you, or it's a connection string to a particular database, or other configuration aspects of a protocol or system that you need to connect to.

So, you can authenticate or configure once and then reuse that connection across different Logic Apps; you don't have to go ahead and reauthenticate or reconfigure those nodes. You can also utilize connections to differentiate usage. Say I'm connecting to two different databases in my Logic App: that's just a different connection string in a different connection. I wanna be able to tweak configuration parameters like timeouts, etc.: you can use different connections for that, and you manage them. Or I wanna have different connections for my dev, test, UAT, pre-prod and prod environments: those are just different connections and configuration that get deployed with your Logic App.
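
For illustration, here is a minimal sketch of how a Logic App definition references a connection as "just config"; the connection name, path and message fields are invented placeholders, not taken from the talk.

    {
      "parameters": {
        "$connections": {
          "type": "Object",
          "defaultValue": {}
        }
      },
      "actions": {
        "Send_email": {
          "type": "ApiConnection",
          "inputs": {
            "host": {
              "connection": {
                "name": "@parameters('$connections')['office365']['connectionId']"
              }
            },
            "method": "post",
            "path": "/Mail",
            "body": {
              "To": "someone@contoso.com",
              "Subject": "Hello from Logic Apps"
            }
          },
          "runAfter": {}
        }
      }
    }

Moving from dev to prod is then just a matter of deploying a different Microsoft.Web/connections resource and passing its id in through the $connections parameter; the workflow definition itself does not change.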

Jeff: Yeah, really, this was a big shift between V1 and V2, and we’ll dig into this more later as we go throughout the sessions. But just how your Logic App now only has to reference kind of that config for how to connect to an API, you don’t have to worry about deploying that entire API, it really adds so much flexibility and power to both your development time and design time but also the deployment story, which we’ll go into.

Kevin: Jeff is gonna show a very cool demo later about how easy that's going to be to push to the different environments.

Jeff: I heard the headline, it's gonna be the best demo of the whole conference, right? Yeah. Suck on that, IoT. No, just kidding. The IoT one is very cool, that's coming.

Kevin: So, what are the out-of-box connectors that we have? This story changes almost every week. We are constantly deploying new connectors out into our backend. So for the ones that we have there today, I have a long list of the SaaS connectors, from Azure Blob to Service Bus to Box to Bing to SQL Azure, Trello, Twilio. If you guys have been using Logic Apps you'll notice that there are some on this list you might not recognize because they're brand new. Let's see: Trello, Wunderlist, GitHub, a couple of other ones, they all just got lit up in the last couple of days or last week. And of course, we have the protocols, so we have HTTP, Webhooks, FTP, SFTP; we have Delay and Workflow; and you'll see another new one in that list, RSS, which we now natively support.

So, here is our shortlist of upcoming connectors, and I say shortlist because, as I said, we'll have hundreds of these connectors coming up, but I wanted to give you a preview of the things that are in the short-term backlog. On the SaaS side, we have Instagram, outlook.com, UserVoice, ZenDesk, Google Mail, Lithium. One of the things that Jim talked about earlier is that we have Microsoft Flow, which was introduced a couple of weeks ago, and Microsoft Flow runs on Logic Apps. Flow uses the same connectors that we do, because it's our engine that they're using to go ahead and light up the scenario. So, there's a lot of motivation to build a lot of these connectors to make them available to Flow; there's a lot of momentum behind that, and you'll see those types of connectors lit up.

Later on in another session, we'll talk about the hybrid connectivity in the managed set of connectors. We had some of the hybrid connectors in that V1 world where, again, you were responsible for creating separate hybrid connectivity connections and deploying things to IaaS and your prem, so we took all that stuff away. So, we're gonna have a nice hybrid integration story for Logic Apps, and we'll talk about that in another session.

And then finally, BizTalk Messaging and B2B. We have a number of capabilities that are already in the product, and we'll give you highlights of some of the things that have lit up; you saw some of it in the demo that Jon did. We'll have XML validation and, as I talked about, we're first-classing XML in the product, because XML is critical to message handling in integration systems; B2B with X12, EDIFACT, AS2 and then Party Resolution, so we can do a full B2B/EDI stack. And again, in another session, we'll actually go into more detail on these components.

Man: Supporting BizTalk Maps?

Kevin: Yes, BizTalk Map. Again, that’ll be another session. So, the session right after lunch, we’ll actually go into the fun details about the things that we’re supporting from BizTalk and Logic Apps. I just don’t wanna spill the sauce here in this talk. But I know how, you know, with this crowd, how important and exciting that is, so I’ll just leave you stewing on that through lunch.

So, I talked about these out-of-the-box connectors, but there are things that you wanna be able to connect to with custom APIs. You wrote your own API; we have a great App Service platform that allows you to create web apps, API apps, and mobile apps, and we want to enable connecting to that code that you wrote and hosted on that App Service platform. For those, we've utilized the power of App Service to host them. If you expose some Swagger and enable connectivity to them for us, then we will light that up as a first-class experience. We'll auto-discover all those APIs in the API apps in our designer, so you don't have to go and figure out how to wire those things up, and we'll give you that first-class designer experience. So, even though it's not our connector, we will still give you a great experience for connecting to that API app, and it'll feel like a native connector in our system.

Azure Functions. So, you know, Azure Functions was announced at Build, and at Build, we also announced that we have integration with Azure Functions natively in the product. So, just like with API apps, we do auto-discovery of your functions, Functions allows you to write some code, you can actually write code inline, or you can have functions that you’ve pre-created, and we’ll find those and be able to call them. So, you can write a little bit of script or code. If you’re familiar with BizTalk, you had your little script thing that you could do or a call-out to code, and this makes it really easy to go write a few lines of code and make it available to Logic Apps without having to write a full API app and deploy it. Serverless code is pretty cool.
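
As a rough sketch of what that looks like in the definition language, a Function call is just an action that points at the function's resource id; the subscription, resource group and function names below are placeholders.

    "Run_my_function": {
      "type": "Function",
      "inputs": {
        "function": {
          "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Web/sites/{functionApp}/functions/{functionName}"
        },
        "body": "@triggerBody()"
      },
      "runAfter": {}
    }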

And then finally, nested workflows. This is something that we released recently: the ability to discover other workflows, so that you can now have nested workflows. Now you can think about componentization of your workflows and reusability of those workflows, because we've broken those up and made it really easy to call and discover them. You could always call a nested workflow via an HTTP call to a request or manual trigger, but there are some limitations there: you have to know the URL, and there are request timeouts for that connection. With a nested workflow using the workflow action, we respect RBAC rules for those resources, because the workflow is a resource, and we also allow it to be a long-running request without timing out; you don't have to worry about those things. It will just complete, and we will react appropriately when it completes.
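
A minimal sketch of the workflow action that calls a nested Logic App; the resource id and trigger name are placeholders.

    "Call_child_workflow": {
      "type": "Workflow",
      "inputs": {
        "host": {
          "triggerName": "manual",
          "workflow": {
            "id": "/subscriptions/{subscriptionId}/resourceGroups/{resourceGroup}/providers/Microsoft.Logic/workflows/{childWorkflowName}"
          }
        },
        "body": "@triggerBody()"
      },
      "runAfter": {}
    }

Because the child is addressed as an ARM resource rather than a bare URL, RBAC applies to it and the parent does not have to carry a request URL or worry about the HTTP request timeout.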

So, for things that are not yet first-class items that you can connect to in the product, we still want to be able to connect to those services out there. The first way to do that is HTTP + Swagger. If you have an API in the wild, a REST endpoint out in the wild, and you wanna be able to call it from your Logic App, and you have Swagger for that API endpoint, we'll treat it as if it's a first-class action or connector in our service, so you will get that rich experience. So, you can use HTTP + Swagger, and we'll discover that and handle it in that way.

Jeff: The other nice thing about this HTTP + Swagger is that even if it's not your API, you can use any API that either exposes Swagger or that you can write Swagger for. So, if you're not familiar with Swagger and the Swagger metadata, if you go to swagger.io, they actually have a Swagger editor. And just last week, I wanted to use an API from Google, and I didn't wanna have to write a custom API or write a function, so I just went into the Swagger editor and really quickly authored the shape of their API, uploaded that Swagger document to my Logic App, and now I had a first-class shape for this API, completely codeless. So, if you just have that metadata so that the designer knows the shape of the API, you can incorporate it into any workflow.

Kevin: Very cool. And then of course, even if you don't have the Swagger, if it's an HTTP-reachable endpoint, we have our HTTP action: you can go ahead and talk native RESTful HTTP to that endpoint and we support that. And finally, we have HTTP Webhooks. So, as an action, you can go ahead, in the middle of your workflow, subscribe to a webhook on a service and then wait for a response back. And that makes it really easy to have long-running workflows that are waiting for responses from external systems.
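
A minimal sketch of those last two options, a plain HTTP action followed by an HTTP Webhook action; the URIs are placeholders.

    "Get_status": {
      "type": "Http",
      "inputs": {
        "method": "GET",
        "uri": "https://example.com/api/status"
      },
      "runAfter": {}
    },
    "Wait_for_callback": {
      "type": "HttpWebhook",
      "inputs": {
        "subscribe": {
          "method": "POST",
          "uri": "https://example.com/api/subscriptions",
          "body": { "callbackUrl": "@listCallbackUrl()" }
        },
        "unsubscribe": {
          "method": "DELETE",
          "uri": "https://example.com/api/subscriptions"
        }
      },
      "runAfter": { "Get_status": [ "Succeeded" ] }
    }

The engine hands the external service a callback URL when it subscribes and then suspends the run; whenever the service POSTs to that URL, the workflow resumes, which is how the long-running wait works without polling.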

So, the next thing you want to do is trigger a workflow. All right, we talked about all the actions and connections that you can do, but of course, it's not very interesting if you can't start a new instance of a workflow. So, the first way that you can trigger an instance of a workflow is with a recurring schedule: you can have it start every hour, once a day, every three months, and have it recur on the schedule that you care about. Today, we have some simple scheduling, but in the future we'll add much more complex scheduling to that recurring schedule.
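
For example, a recurrence trigger in the definition language is just a frequency and an interval; this sketch starts a run every hour.

    "triggers": {
      "Hourly": {
        "type": "Recurrence",
        "recurrence": {
          "frequency": "Hour",
          "interval": 1
        }
      }
    }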

Man 2: Wait. Can it be real time?

Kevin: What do you mean by real time?

Man 2: To be like from Salesforce to Dynamics CRM if an event happens, you call out the…

Kevin: So, the question is, is it real time?

So, we have triggers for particular connectors, and those triggers will go ahead and reach out to that service; some are polling triggers and some are push triggers, and they'll inform us when a new item comes in. So that's on a per-connector basis, depending on whether they've implemented some type of trigger. But if they haven't, then you can use your own polling mechanism, which is what I have here: it's polling an API. So then you can have something poll an API every minute or every 15 seconds to say, "Hey, is there more work for me to do, or is there a new event that I'm interested in," and you can have filters. And as a matter of fact, in the trigger implementation we have, you can even have state across the trigger calls, so that you only pick up the things that you care about.

HTTP POST Request. So, we have what's called a manual request trigger. What that does is it creates an endpoint for you to call into your Logic App, so it acts just like a REST endpoint: do a POST against it, and a new instance of that Logic App will get run. And with that request, we have also now enabled you to define a schema for the request. Once you define a schema, you can treat that incoming message as a first-class message that you can then tokenize and utilize in the rest of your Logic App.
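
A sketch of that manual request trigger with a schema attached; the field names here are invented for illustration, and the exact type and kind values depend on the schema version you target.

    "triggers": {
      "manual": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
          "schema": {
            "type": "object",
            "properties": {
              "customerId": { "type": "string" },
              "email": { "type": "string" }
            },
            "required": [ "customerId" ]
          }
        }
      }
    }

With the schema in place, later actions can reference tokens like @triggerBody()?['customerId'], and the designer can surface them as first-class fields.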

And finally, webhook subscription. So, I talked about a webhook action in the middle of your flow, but we also have a webhook subscription so you can say, “Hey, you know, set up a webhook subscription to GitHub,” GitHub has webhooks, you can subscribe to it. And when a new build comes out or a thing gets checked in, you get notified and you can continue. And of course, we also have manual, in the portal you can manually start an instance of a Logic App to see how it works.

Architecture. So, I wanted to spend a few minutes on architecture so that you have an understanding of how this works as we talk about different components, you’ll understand how that fits into the model.

So at the top layer, we have the resource provider. Logic Apps are first-class Azure resources. We have a namespace, microsoft.logic/workflows: workflows is the resource and microsoft.logic is the namespace. And so, we sit behind ARM, the Azure Resource Manager, as a first-class resource, and with that we get a management plane for doing resource CRUD; a workflow is a resource. You can deploy, and you can use all the ARM facilities that you have for RBAC, tagging, and auditing for all those resources, and then you can manage it in that way.

Through that same interface, you can get your tracking info. So, for a particular run, you get run instance information through that management interface, as well as definition validations. So, when you push a new Logic App, that front end will actually do the validation to see if it’s a valid workflow. I just wanna be conscious of the time.

Then we have the data plane. The data plane gives you direct access without having to go through the multiple layers of ARM and the different authentication mechanisms that it has, so that you can do things like invoke a Logic App directly. So, like I talked about with the request endpoint, that's actually a direct access endpoint through our data plane, so that you can quickly invoke a new instance of a Logic App. We also give access to the inputs and outputs of every action. You get a SAS URL so you can actually get those messages and not have to pull large messages through the management plane.

The next layer is our execution engine. That execution engine has its sets of job queues, run information, and inputs and outputs, and it's responsible for actually executing your Logic App; that's the thing that's running it. For your Logic App, we really have an inference engine that understands what the dependencies are across your actions. We give you a very nice designer so it looks like a sequential set of steps, but it's actually a much richer engine in the background. This engine that we have is the same engine that powers ARM, which does millions of deployments a day. So, we're using that same power, and you can trust in that engine.

Some things that are important to understand about what the execution engine does: not only does it do new instance activation, but it does the scheduling for the actions. We're highly parallelizable, so if there are no dependencies across a set of actions, we will parallelize those actions and our backend will go ahead and run all of them in parallel. But it also understands the scheduling, and if one action needs to wait for one or more actions to complete, it'll understand that and schedule it at the appropriate time. But there's another part, which is these actions.

So, I've talked about connectors, and we try not to differentiate them too much. But the understanding to have is that the engine is responsible for invoking the connectors. It will reach out and talk to our connectors, and I'll talk about that host in a second. But the actions that are built into the engine, the HTTP action, the workflow action, wait, response and others, are actually running in our backend, and this point will be important later when I talk about IP addresses.

And then finally, we have our connector host. As I said, you don't have to manage deploying those connectors anymore; we do it for you. They're all pre-deployed in a highly scalable backend system, and we manage those connectors for you. I only listed some of them because they don't all fit. So then, you just have connections to these, and a connection is another resource, microsoft.web/connections; it's just your config for those endpoints.

Some of these are codeful connectors and some are codeless connectors, so we manage the pass-through for some of them. Twitter, for example, is codeful: we'll actually do some manipulation of the message before it goes. But you don't have to worry about that; we take care of it for you.

The next part is where it becomes interesting, in the distinction between the two parts. There are a lot of scenarios where customers want to be able to whitelist some IP addresses when they connect to their systems. And one of the things that we will be doing for the engine is providing static IP addresses. We don't have that today; it's on our short-term backlog. But all of those requests, for example an HTTP request, if you have an HTTP action, are initiated from our backend engine, and that will have the static IP address.

Man 3: When?

Kevin: When? Soon.

Jeff: Short term backlog.

Kevin: A short-term backlog. Now, the connector host is another layer of our system, and it already has static IPs today. So, if a connector is reaching out to some system, you can trust that it already has a fixed IP; we have that published, and we can give you links to those IPs. So that's there today. I wanted to call those out because people get confused, like, "I thought you had it," or "I thought you didn't have it." So, there are two parts to static IPs, and we'll provide those to you.

Jeff: The other piece I’ll just call out in this map, because it’s worth noting, oftentimes when you’re working with Azure resources, you are used to looking at things like, “Okay, here’s my CPU limit, here’s my memory limit, here’s what I’m working with.” The execution engine that we use for Logic Apps actually can live outside of that entirely.

So, one of the things I love doing is I love watching our telemetry. We have some very cool tools hooked up to things like Azure Data Lake and other pieces, so that we can see in real time what's happening. And every now and then, more often than not, we'll get a customer or two and start to see these huge spikes in usage. Now, this execution engine, as Kevin mentioned, is that same engine that powers Azure resource deployments across all of Azure. So, it scales up and it continues to kick off those actions and start those new workers as fast as it needs to, and so it's able to handle all of those incoming requests.

Today, with the billing, there are throttles that we have in preview, but I just wanted to call this out if you're wondering, "Hey, can Logic Apps even handle the workloads that I'm passing it?" Because of this architecture and the execution engine, and the way that it spins up those actions to distribute the work, parallelize it and do all those other awesome things, it scales up extremely well and extremely rapidly, so you don't ever have to worry about it. You just say, "Here's the work I need done," and we will make sure it gets done for you.

Kevin: That's right. So in this layered architecture, the front end scales out in a fashion that can handle the volume of web requests that are coming in. At that middle layer, the execution engine, we have a different scalability mechanism: we size it to the needs of the incoming load, and it scales up and then we scale it out as needed.

And then finally you have your connector host, which has a different scalability factor: as more of those connectors get used, those systems will automatically scale out as well. So, you don't have to worry about our ability to scale to your needs; that's our job, and we'll make sure that happens.

So, one of the other things is that the service is provided everywhere that is publicly available in Azure. We're almost there today, but very soon we'll be in every data center that's out in public Azure today. After we get to the public Azure regions, we'll work on the sovereign data centers, which means U.S. Gov, Germany and China. That will be coming out soon.

So that means you can deploy your Logic Apps to the locality that you care about, close to the services or the people that are using them. It also means that you can more reliably push your Logic Apps to different regions. If you, you know, don't have faith in Japan and you wanna push it to Hong Kong, that's perfectly acceptable as well.

Jeff: And all the architecture happens inside of that region as well. So, one of the questions we get is, “Hey, if I’m using this data center, am I getting data pushed across the different regions or whatever?” That doesn’t happen. We keep it all isolated inside of that region.

Kevin: So, your data stays in that geo region. It doesn’t get pushed out. So, Logic Apps designer.

Jeff: It’s beautiful now.

Kevin: That’s right.

Jeff: Much more beautiful at least. Yeah, so much better. That’s worth it honestly.

Kevin: So, we've heard your feedback, and not only have we heard your feedback about making that designer better; Jim stole my thunder, but we're bringing Logic Apps to Visual Studio.

Jeff: Finally.

Kevin: Yes. So, our guys are working on that right now. It’s a very hot work item, and they are working on it right now. So, I don’t have the bits but I do have the ability to show you what’s coming up and explain what we’ll be doing.

Jeff: We honestly emailed one of the developers who's working on it, like, "Look, we know the bits are rapidly evolving and it's not in a state you want us to show, but can you take screenshots?" So, it's funny. If you actually look at these screenshots, you can see the little screenshot tool he used yesterday when he took all of these. So, these aren't mock-ups, these are screenshots that the developer was nice enough to take.

Kevin: So, what we're trying to communicate is that this is just around the corner. We will have a preview of this and get it into your hands really soon so that you can start working with it. One of the things that is important to us is enabling those scenarios in Visual Studio. Not only can you stay in the tool that you're used to for doing development, but it also allows you to do CI/CD: you can integrate it with TFS, get the power that you have with Visual Studio, and have the ability to store those Logic Apps someplace reliable that multiple people can work on.

Man 4: It can also support versioning?

Kevin: Sorry?

Man 4: Does it also support versioning?

Kevin: Versioning is coming up soon.

So, one of the things that we wanna do is make sure that, because it's an Azure resource, it works with the ARM deployment template, so we'll have a first-class experience of Logic Apps being embedded in an ARM deployment template. Visual Studio already has the ability today to create a cloud project for an Azure resource group. You go ahead and select the Logic App, and it will create a project for you, and you get an ARM template with a Logic App resource already built into it. If you double-click on that, we will recognize that it's a Logic App in that template and bring up the designer for you. We'll have a default Logic App there just so you can get started, but we will recognize it's a Logic App, and now you have the designer in Visual Studio. And it's the same designer that you have in the portal, it's not any different.

So, you can go ahead and change that and work in Visual Studio just as if you were working in the portal today, and you have all the same capabilities of connections and discovery of connectors that are out there. And of course, along with that you have the code view. Right? So, you're in Visual Studio now, the code view is much richer, and you get all of the JSON schema validation. The next piece that we'll work on is IntelliSense for functions, giving you the power to work with that JSON inline.

Now, if you wanted to actually look at the resource template itself, as opposed to just the Logic App definition, we still enable you to bring up the deployment template so you can see all the ARM deployment configuration information that surrounds it and how it gets deployed. So you can now check that into your source control and deploy it right from Visual Studio, so you get that control from there as well. And that leads really nicely into our workflow definition.
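
The resulting project is essentially an ARM deployment template with the workflow definition embedded under a Microsoft.Logic/workflows resource, along the lines of this trimmed sketch; the name and API version are illustrative.

    {
      "$schema": "https://schema.management.azure.com/schemas/2015-01-01/deploymentTemplate.json#",
      "contentVersion": "1.0.0.0",
      "resources": [
        {
          "type": "Microsoft.Logic/workflows",
          "apiVersion": "2016-06-01",
          "name": "MyLogicApp",
          "location": "[resourceGroup().location]",
          "properties": {
            "definition": {
              "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
              "contentVersion": "1.0.0.0",
              "triggers": {},
              "actions": {},
              "outputs": {}
            },
            "parameters": {}
          }
        }
      ]
    }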

Jeff: Perfect, awesome. We’re really excited about Visual Studio integration. It wasn’t too long ago that I walked into a conference room, and the Visual Studio was open, like the IDE was open, and I saw the designer there, and they were adding connections and doing these things. And I seriously just stopped, and I was like, “Honestly, this is like one of the best feelings I have right now is finally seeing this up and running.” We’re really excited to get this into a state that we can push to you, and let you all start playing with it as well, and it is, as Kevin mentioned, coming soon.

So, I wanted to go into some of the workflow control methods that we have today and some improvements that are coming very shortly. So we have a number of different control flow elements that are possible to accomplish in a Logic App today. We have like a response action, which means if I have a request trigger and you have a response action later in your workflow, that will now become a synchronous workflow that will wait to send that response back until it gets to that step, and send the response back that you want.

So, this is a pattern that I've used, and if I've got time in my next session I'll show you. I've actually built an app that needs to integrate with a number of different systems, including on-premises systems, so when I need to get that data, what I'm actually doing is calling a Logic App that has a response action. It goes and grabs the data or sets the data that it needs and sends that response back to my app with the data it retrieved, and now I have that data in my application. So that's a very useful pattern that you can follow.
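
Sketched in the definition language, that synchronous pattern is a request trigger plus a Response action at the end of the flow; the action name being referenced here is hypothetical.

    "Response": {
      "type": "Response",
      "inputs": {
        "statusCode": 200,
        "body": "@body('Get_customer_data')"
      },
      "runAfter": {
        "Get_customer_data": [ "Succeeded" ]
      }
    }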

We have looping today, For Each loops and Do Until loops. These are associated with actions, so if I had a list of email addresses, I could say, "For each email address, send them an email." Until as well: maybe I'm calling an endpoint whose reliability I'm worried about, and while we do have retry policies built in, I wanna say, "Hey, continue to call this endpoint until I get back a successful status." You can put that inside of an Until loop.

Split On is an attribute of a trigger. So, if my trigger receives a batch of items, at the trigger level I can define the Split On. If I'm getting a batch of customers, at the trigger I could say, "Split on customers," and that's going to debatch it into a number of different parallel workflows, so that you can deal with each of those individual records. A number of our connectors today actually use this model. It's very useful.
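
Split On is declared on the trigger itself; a rough sketch, with an invented payload shape.

    "triggers": {
      "When_customers_arrive": {
        "type": "Request",
        "kind": "Http",
        "inputs": {
          "schema": {
            "type": "object",
            "properties": {
              "customers": { "type": "array" }
            }
          }
        },
        "splitOn": "@triggerBody()['customers']"
      }
    }

Each element of the customers array then fans out into its own run, so downstream actions deal with a single record at a time.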

And then finally, we have Conditions, which again today are associated with an individual action, to say like, “If it was bar, then execute this action,” and I could have a second action, which is, “If it is not bar, then execute this one.” Now, this is great today, and it enables a lot of patterns, but we’ve got feedback, and we understand as well, as you start working with more complex workflows it gets very difficult to pattern things together.

For instance, if I have a For Each, if we use this example and I have a list of email addresses, but maybe instead of just needing to send an email to each customer, first I need to get the email address and do some data transformation. Then I wanna get that email address and add it into a SQL database. And then I wanna send the email, right? If you were using that in the Logic App today, it’s possible but it’s difficult. You have to do For Each’s with For Each’s and Ranges and it’s hard to do.

So, one of the features that is very close to being released into preview form is a new concept and a new way to nest actions within each other, so we’re introducing this concept of Scopes, which is a collection of actions. So, at the very basic sense, you could have a scope which is, I have an action which is a scope, and it contains steps A, B, C and D. This can be useful for a number of reasons. Maybe I just wanna categorize and collapse things with different pieces. I can actually, let’s see if this is working. I have… I don’t wanna have to type a long password. Again, I’ve got this running in the dev box right now, but this is not…

Kevin: He’s gonna go off script.

Jeff: This is not the final version. I just wanna show it because it's so nice to see. But this is an example on the dev box right now. So, the UX elements aren't there, I know it's not the prettiest. But you see here that I have this concept of a scope. I could continue to add actions, I can also continue to nest scopes inside of it, and collapse and manage these. So that's the concept; I just wanted to show you that really quickly.

Kevin: So, what that means is that with scopes, not only do you have these control flow things on top of it, but now you can do exception handling across the actions in your scope and have better error handling as well.

Jeff: Yeah, that’s right.

Man 5: Is that similar to what we had in BizTalk? BizTalk orchestration scope?

Kevin: I won't equate those two, because scopes have certain behaviors in BizTalk that are different in Logic Apps.

Jeff: Correct. Yeah. And we can get into that potentially later. So, with that as well, you can do a For Each loop, and that For Each loop is now a first-class action, and you can have a number of actions inside of it. So, now, in that scenario, if I have an email address and I need to do a number of different steps with each email address, it becomes much easier. Same with Until and Conditions. I know one piece of feedback we get a lot is, "I don't just have an If-Else, I have an If-Else, If-Else, If-Else, If, then blah, blah, blah, and I have kind of this switch and routing," which, again, while possible today, takes almost rocket science to wrap your mind around, and now you can easily just nest those If statements and Else statements all within each other. So that's coming very soon, in the next few weeks, that you'll be able to start utilizing some of that power.
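
Under the covers, a scope and the new first-class For Each are both actions that contain other actions. A rough sketch of the shape, with invented connector details, might look like this.

    "For_each_email": {
      "type": "Foreach",
      "foreach": "@triggerBody()['emails']",
      "actions": {
        "Insert_row": {
          "type": "ApiConnection",
          "inputs": {
            "host": {
              "connection": { "name": "@parameters('$connections')['sql']['connectionId']" }
            },
            "method": "post",
            "path": "/datasets/default/tables/Customers/items",
            "body": { "Email": "@item()" }
          },
          "runAfter": {}
        },
        "Send_email": {
          "type": "ApiConnection",
          "inputs": {
            "host": {
              "connection": { "name": "@parameters('$connections')['office365']['connectionId']" }
            },
            "method": "post",
            "path": "/Mail",
            "body": { "To": "@item()", "Subject": "Welcome" }
          },
          "runAfter": { "Insert_row": [ "Succeeded" ] }
        }
      },
      "runAfter": {}
    }

@item() refers to the current element of the collection, and runAfter sequences the nested steps, so each email address gets the database insert and then the mail send, in order.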

The other one I'll just highlight really quickly, in addition to the number of connectors that we have, and I just wanna check the time: we also have a lot of workflow definition functions that you can use inline. There's another piece of enhancement that I was hoping was gonna get deployed this week, but it got caught up on one test case, so we've got to tweak it just a little bit more. If all goes well, it will come in the next week or the next deployment: the ability to author these workflow definition functions within the designer.

So, today, if you've used the new designer and you start typing a function, sometimes it escapes the @ sign. We're taking away that behavior, so that if you need to convert data between XML and JSON, or you have a simple ternary operator to do an If, I can type an @ sign and start to do those conversions. Some of the new ones we've introduced, like I'm highlighting here, are If, XML, XPath, and converting between JSON and other data types like binary, data URI, and string.

And a new one that we’re working on as well is this Result, which is very useful again for this exception handling scenario. So, if you imagine I have a scope of five actions, and outside of that scope I have an action that says, “Hey, I need you to fire if my scope failed.” Well, it’s one thing to know that my scope failed, but I also want context and understanding what was it that failed, what were the inputs, what were the outputs, maybe I need to act on it or I wanna be able to log that or send notifications as needed. So, we’re gonna surface this function called Result, where I can say, “I wanna know what the result of this scope was. Give me all of the failed actions,” and I can iterate over that to understand what were the inputs and outputs and better handle errors within my workflow.
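
A few illustrative expressions of the kind being described; the action and scope names are made up.

    @json(xml(triggerBody()))
        round-trips a payload through XML and back to JSON
    @xpath(xml(body('Read_file')), '/order/total/text()')
        pulls a value out of an XML document
    @if(equals(triggerBody()?['status'], 'paid'), 'ship', 'hold')
        a simple ternary
    @result('Process_order')
        returns the inputs, outputs and status of the actions inside the Process_order scope, which you can filter down to just the failed ones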

Okay, the management piece. This is an exciting one that we'll probably close with for now. One of the big focuses I have for this Integrate conference is that, as much as you all wanna see it, I'm not gonna be doing any Twitter-to-Dropbox demos, right? I wanna figure out… I won't, Charles will. No, I'm just kidding. He's gonna be talking about Microsoft Flow. We really wanna make sure that Logic Apps is an integration tool that isn't just cool to demo and say, "Hey, check out these awesome things we can do on the fly." There's another story to it as well.

Once I have the Logic App, I need to make sure that I can make this operational. I need to be able to manage that Logic App. I need to be able to deploy that Logic App. I need to have source control and all these different aspects that are so important to using this in an enterprise. So, that’s a huge focus of our investment as we go forward, which is why we’re promoting things like Visual Studio integration. We have a lot of improvements coming to our telemetry as well.

So, today, there are many ways to get the data. We store state at every single step of your Logic App. So, every time an action executes, we know what the inputs were and what the outputs were. You can view all of this data through the Azure portal, through the API and the SDK. But one of the things that we've been working hard on is integrating natively with Azure Diagnostics and having this idea of a tracked property.

So, let me actually, I think I have this here. This is available today, unbeknownst to a lot of people: if I come down here to these charts that are disabled by default, there's a service in Azure called Azure Diagnostics, which cloud services, VMs and networking use right now; it's really powerful and helpful. I can go ahead and turn this on and subscribe a storage account that I have in my Azure subscription to some different events. So today I can get workflow runtime events and metric events, and what that's going to do is, anytime we have any of these events in the Logic App, we'll push it to the storage account. I'm just gonna show you really quickly what one of these events will look like in this new update that we're deploying and some of the data that we're gonna be pushing.

So, for every single time that there's an event, we send a record like this. In this case, this was an action started event; there's also an action failed event and all these other different types: workflow started, trigger started, all that fun stuff. So, this is a workflow action started event. We're passing in a number of different identifiers to help you understand what run this was a part of and what resource this was a part of. We're also providing some correlation IDs.

I'm just gonna call out really quickly that there's a concept of a client tracking ID, and the reason it's called a client tracking ID is because you can set this ID. As I'm triggering a Logic App, I can say, "Here's the ID and the GUID that I want you to use in your telemetry," because maybe this was a workflow that started in BizTalk or somewhere else and I have an ID there. I can pass that ID into Logic Apps and we will honor it throughout your entire workflow and your solution, through nested Logic Apps, through other pieces that you call into, so that you can correlate those across an entire solution, not just a Logic App run.

There’s also this ability that you can start marking properties as tracked properties. So if I have an action that has something like this or I have an invoice number and a customer ID, that’s some information that would be really useful in my metrics to have, because not only do I wanna know that a workflow failed but I wanna know what were some of the identifiers about that workflow. So, we have the ability for you to track certain properties of actions, and we will push that through the diagnostics pipeline.
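
Roughly, a tracked property is declared on the action itself; the shape below is a sketch with invented names, and the x-ms-client-tracking-id header is the way the request trigger accepts a caller-supplied correlation ID.

    "Insert_invoice": {
      "type": "ApiConnection",
      "inputs": {
        "host": {
          "connection": { "name": "@parameters('$connections')['sql']['connectionId']" }
        },
        "method": "post",
        "path": "/datasets/default/tables/Invoices/items",
        "body": {
          "InvoiceNumber": "@triggerBody()?['invoiceNumber']",
          "CustomerId": "@triggerBody()?['customerId']"
        }
      },
      "runAfter": {},
      "trackedProperties": {
        "invoiceNumber": "@action()['inputs']['body']['InvoiceNumber']",
        "customerId": "@action()['inputs']['body']['CustomerId']"
      }
    }

A caller that POSTs to the request trigger with an x-ms-client-tracking-id header sets the clientTrackingId you see in the emitted diagnostics events, so the same ID follows the run end to end.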

Now, this is great in and of itself, the fact that now all this data is in a storage account that I can understand and parse. But we don’t want it to be even that difficult for you. So, there’s two big pieces that are coming very shortly in the next few months that are gonna make this even better. One is the ability that as these events are coming into your storage account, you can also at the same time push them into an Azure event hub, so that you can stream them through Stream Analytics, tap them into other services that you need.

But the other one that I will show you now is a tool called OMS. I don't know how many of you are familiar with the Operations Management Suite, but we're working with them right now. This is a really basic dashboard that I created on the flight over, where I have a Logic App connected into OMS. And they're building a custom dashboard pack for us that's gonna be coming out in the coming months, where you can have all your Logic App information right inside Operations Management Suite, being updated as those events are being emitted. So I understand where my runs are, what has succeeded, what has failed, and those are those exact metrics that I was showing you in that form, but these are entirely indexed and stored by the Operations Management Suite.

So I could come into this log file, and if I switch here, you see here’s that exact data I was looking at, but now I have information to easily filter and say like, “Hey, I actually just wanna know where the operation name is this certain field.” And you’ll notice as I’m writing this it’s autocompleting to understand here are the options that are available, and I can easily sort and filter. And with the ability to track properties, our hope is that when you have this Operations Management Suite and you get a customer who says, “Hey, I never got the data that I needed,” you could quickly open this up, and in a single query understand exactly what happened in that workflow, what steps are successful, what steps failed, and so on. You’ll also notice up here… what one?

Kevin: Across workflows.

Jeff: Across workflows, that’s right. This is all my data across any workflow that I have. You’ll also notice I’m just gonna call this out, OMS has some great integration to export to Excel, export to PowerBI, and set up alerts.

So this is just a little bit of the stuff we're working on to make that story better for management, because we don't want these to just be cool demos that we show; we want you to be able to use these integrations in your enterprise and be able to track and understand that data. So with that, actually, I think that's kind of the big one. I'll do part of the demo in the next one, but I'm just gonna put up this slide here. So, I mentioned diagnostics, OMS, centralizing your telemetry with OMS, and the Event Hub piece. I'll do my Logic App demo in the next session, which starts right now. But for this one, again, here's the plug to reach out to us.

Feedback is huge, and I hope you really do know that we don't just say we want to hear from people; we actually do love engaging with customers. If you're on the forums or if you're making comments, you'll notice that a lot of the same people who are giving these presentations are the names that you'll see pop up replying to these email addresses. We very much want to be in touch with you. We wanna understand, "Hey, it's great that we're doing this, it would be even better if we did that," or get validation of what we're doing, or hear about other features that you want. That's so important to us as we manage our backlog and prioritize what's coming up, to make sure we're working on the right things.

Kevin: Yeah, please connect with us. We’re there for you to make our product better and to give you a better experience.

Jeff: Okay.

Kevin: All right.

Jeff: Perfect.

Kevin: Thanks, guys.
