Integrate 2016

May 11-13, London

Advanced Integration Strategies

Length 36:10
jeffhollan

Video Transcript

Jeff: While we have a brief moment, I was actually going to build a Logic App in that last session. I’m just gonna build it really quickly because I need it for my next one and I don’t want to go over too much on time. The scenario I’m gonna work with, for those of you who are here, and I’ll just jump into this before I actually invite Howard on stage, is getting customer data. So, this is the scenario I’m gonna be working with for the deep dive, and we’re gonna take it through some of the stuff we talked about.

We’re gonna say that we have a system that’s publishing some customer data, and in this case, I’m gonna use the manual trigger because I love the manual trigger: it allows you to define a schema that the designer can use. It’s actually a concept that we want to branch out as we can, so that you can define schemas for other things, but I won’t get into that. That’s a backlog item, so I’m not making any promises on dates.

But for instance, I know I’m gonna get a customer name, a customer e-mail, where it was referred from, maybe their plan. One little tidbit I will tell you right now: if you’re using that request trigger, use a tool like jsonschema.net. This thing is awesome; I really wanted to put this website in the example. We’re hoping to bring this functionality into the request trigger itself. But you see, I just pasted in the sample payload and this tool generated the JSON Schema for it. Now, why that’s valuable is because when I paste that schema into my manual trigger, you’ll notice it on my next step, which in this case is inserting into a SQL database. Let’s insert a row into our customers database and we will do this one. As it gets the data, what you’ll notice, and I’m actually gonna show you this feature that’s in this environment that I’m using, is I can say @guid now and that works; it’s a valid expression to generate a GUID. But you’ll see I actually have all the different attributes that are going to be passed through this request, and the designer was able to look at my schema and know, “Hey, I’m gonna have things like a customer name.” I almost clicked customer e-mail. “I’m gonna have a customer e-mail. I’m gonna have a phone number.” So, it’s a huge help to use that, and it generates all the stuff in the back end for you. The last thing I was gonna have this Logic App do… actually, I’m just gonna leave it there and we can get to it in a little bit.
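To make the scenario concrete, here is a minimal sketch of how an external system might publish that customer data to the request trigger. It is an illustration only: the callback URL is a placeholder, and the field names (customerName, customerEmail, and so on) are assumptions based on the fields Jeff mentions, not his actual demo values. The payload is the kind of sample you would paste into jsonschema.net to generate the schema.

```csharp
using System;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class PublishCustomer
{
    static async Task Main()
    {
        // Hypothetical callback URL copied from the Logic App's request (manual) trigger.
        const string triggerUrl =
            "https://prod-00.westeurope.logic.azure.com/workflows/<workflow-id>/triggers/manual/paths/invoke?<sas>";

        // Sample payload matching the JSON schema pasted into the trigger
        // (field names assumed from the scenario described above).
        const string payload = @"{
            ""customerName"":  ""Contoso Ltd."",
            ""customerEmail"": ""hello@contoso.example"",
            ""phoneNumber"":   ""+44 20 7946 0000"",
            ""referredBy"":    ""Integrate 2016"",
            ""plan"":          ""premium""
        }";

        using (var client = new HttpClient())
        {
            var response = await client.PostAsync(
                triggerUrl,
                new StringContent(payload, Encoding.UTF8, "application/json"));

            Console.WriteLine($"Logic App responded: {(int)response.StatusCode}");
        }
    }
}
```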

So, let me introduce this next session really quick. It’s kind of a continuation of the last one. What I want to go over is more of that vision I was talking about, what I really want people to take away from this conference: okay, let’s take what was a demo, the one I just started building, and figure out how we take it to the next level. What if I’m leaving the happy path? What happens when I have scenarios or problems that aren’t easily solved by a drag-and-drop designer? So, I’m gonna talk about some patterns and operations that give you extensibility there, and talk about how you can create deployments and release management and what that story can look like in Logic Apps today. But before I do that, I actually want to invite Howard Edidin on stage.

Howard is a partner of ours that we’ve been working with. We’ve been working with a number of you; I’m not picking favorites here. There are actually stairs on this side if you need them, as well. The reason I invited Howard is that he’s been working with Prescriber 360 over the last few weeks to build a use case for Logic Apps, and I just want to give him the opportunity to explain what he’s been able to do and how Logic Apps has helped with that story. So, I think this mic is on and you can go ahead and use that.

Howard: The project was for a large international pharmaceutical company that was currently using Salesforce. We have a product called Prescriber 360, which is based upon, or actually added onto, CRM Online. And the project required synchronization of prescriber data between Salesforce and CRM. Basically, the requirement was to move record processing from scheduled to real time. They were initially looking at BizTalk, and we convinced them that Logic Apps would be the ideal platform. It would be the simplest to integrate, because we had a very short timeline for getting this to production.

Specifically, our product is called Prescriber 360, but I’m referring to it as CRM in this presentation. Salesforce to Prescriber 360: this is a process where we’re basically using Salesforce notification services, which will send a notification out based upon a new record or a modification of an existing record. In that sense, a notification service is really a polling type of thing: we’ll get a notification, and then, based on a trigger, go out and get the records. And in essence, we had to create a separate Logic App for each one of those. I think this is reversed on here.

Jeff: I might have done that, Howard. I’ll take the blame; it’s my copy-pasting abilities.

Howard: That’s okay. It’s probably mine; I probably sent you an older copy. But I like to visualize things first, so I use Business Process Model and Notation. I’ve been using it since early BizTalk days because it’s easy to understand; both technical and non-technical people can understand it, so you can present it to a client or a customer to show how the process works and they can easily visualize it.

Going from Salesforce to CRM, we basically have to deal with a parent record, which in Salesforce is the account, and then child records, which would be addresses and other types of information. One of the issues we had with Salesforce is that when you update or create a record, especially a child record, it would not only send a response back for that child record but also send a response back for the parent record.

So, you have two responses coming back. The parent record, of course, we try to ignore and push off to the side, because we want the ID coming back in the child record. So, this is a round trip going from Salesforce to CRM Online and back again to Salesforce with an acknowledgement, because we had to use a correlation ID, a Salesforce ID in CRM and a CRM ID in Salesforce, so we can easily handle the updates and synchronization of records. I’ve got 30 seconds left?

Okay. Prescriber 360 to Salesforce. CRM doesn’t really have notification services, but again, like Jeff, I like manual triggers. I exposed a manual trigger with just four fields, and CRM will consume that. Every time there’s a new or modified record, it’ll fire off my trigger, and I go back out, pull that record, and send it directly across to Salesforce. This was a single Logic App with, I think, 25, maybe 50 steps in it, and it’s quite complex.

And due to some limitations with the designer, I actually coded the whole thing in code view, which I found was much easier. I actually used resources.azure.com, the Azure Resource Explorer, which gave me the full schema and everything there, including all the connection data, because at times I would lose my connection and have to go back and change it. It’s a lot easier, I found, working in code view, especially since this is a preview of Logic Apps and the designer had some inadequacies, especially when I had multiple predecessors, where I wouldn’t get the attributes available, so I couldn’t map them directly in there.

Also, working both ways with CRM and Salesforce, they publish the whole schema, so you’re able to see the whole schema, but you only want specific fields. And of course, if you’re looking for data in a field and the field doesn’t exist, you get a runtime exception. So, I had to go back into my code and add validation: if it’s null, ignore it. That simplified things. Going a little further, and this is a little bit of a plug for my company, it’s basically a full-service application for pharmaceuticals. We just published this to production early last week, and the client’s going through some testing on it, finding fields that are missing and things like that. But it’s really production ready.

And the advantage of using Logic Apps is that we’re very fast to production. I mean, I developed this over a period of time, but if I’d had all the knowledge and information that I gained from Jeff and the other team members, I could have pushed this out in a week’s time, and that’s one of the advantages. And we used no custom API calls, although I did have an API originally for Salesforce, which I created as an API App because there wasn’t a Sandbox version. And when the Sandbox was available, we migrated away from that API and just used the CRM Sandbox.

Jeff: Perfect, awesome. Hey thanks, Howard. Honestly, thank you very much for coming up. I really appreciate the help.

Howard: Thank you.

Jeff: Howard’s been great, not only for pushing our things to the limit. He mentioned that he was clicking like 50 actions inside of the designer, and I don’t know if you’ve seen us do a demo that had 50 actions. But the thing I really appreciated about Howard, and there are a number of you in the room who fit the same bill, and I would have you stand up if I had the time before lunch, is that he was very open. He’d send us emails like, “By the way, I found a little hiccup once I hit action number 49, and here are some things,” and a lot of the things he mentioned we were able to prioritize on our backlog and start working on those bugs, because we’re getting ready to hit general availability.

And as he mentioned, he was able to do that whole scenario, talking to and synchronizing between two SaaS services, Salesforce and CRM, with a number of those different complex record types, all through Logic Apps. So, we want to encourage you to do the same. Now again, I kind of already hit this: you might come to these sessions and think, “Man, these demos are so cool, but how do I manage these resources and make sure that I have the right pipelines and processes, so that I’m not just building a demo vehicle with my partners and customers, but that I actually have a system that I can push into production that will add value and not pain to my life?”

So, I want to talk about that for a little bit. The first thing is that Logic Apps is amazingly, 100% extensible. Kevin hit on this a lot: you can write your own actions and triggers in a number of different ways. We have a concept of just a basic action, which is, I send a request to something and I get a response back, just like, “Do you have something?” “Yes,” right? So, there are a number of ways you can do that. One is through an Azure API App, which can be discovered automatically through the designer. We talked about how you could use Swagger if it’s outside of Azure, or anywhere really. Azure Functions is a huge value add.

Oftentimes where I see things leave the happy path is, okay, “I’m pulling these files off of FTP, but it’s not just pretty XML. It’s some weird flat-file format with XML nested inside of it. How do I pull this XML out and pull out the properties I need?” And you’re able to quickly spin up an Azure Function, do something like a regular expression, pull out that XML and return it back to the Logic App in like two lines of JavaScript, which adds so much extensibility for those quick data manipulation and data operations that Azure Functions is great at, with serverless compute in a consumption-based model. Nested workflows are also very useful, and we hit on Swagger.
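Going back to that flat-file example for a moment, here is a rough sketch of that kind of helper: an HTTP-triggered function that uses a regular expression to pull an embedded XML fragment out of an otherwise flat payload. Jeff describes doing this in a couple of lines of JavaScript; this version is in C#, and the element name (`Order`) and the function shape are assumptions for illustration, not his actual demo.

```csharp
using System.Net;
using System.Net.Http;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading.Tasks;

public static class ExtractXmlFunction
{
    // HTTP-triggered function: receives the raw flat-file body from the Logic App
    // and returns just the first <Order>...</Order> fragment found inside it.
    public static async Task<HttpResponseMessage> Run(HttpRequestMessage req)
    {
        string body = await req.Content.ReadAsStringAsync();

        // Grab the first embedded XML element; "Order" is a hypothetical element name.
        Match match = Regex.Match(body, @"<Order\b[\s\S]*?</Order>");

        return new HttpResponseMessage(match.Success ? HttpStatusCode.OK : HttpStatusCode.BadRequest)
        {
            Content = new StringContent(
                match.Success ? match.Value : "<error>no XML fragment found</error>",
                Encoding.UTF8,
                "application/xml")
        };
    }
}
```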

Now, there are two patterns worth mentioning, which cover what happens if I have an operation or a task that takes a long time. Maybe I’m talking to a system that I know is gonna take a long time, or there are certain steps in the middle, like human intervention, where I have to pause an integration or a workflow and wait for an event to occur. The trouble a lot of people run into is that Logic Apps by default will time out a request after a minute, because after a minute it has no idea: are you still alive, right? Are you still there on the other side, or have you dropped dead? So, for right now the default is one minute, but whatever the limit is, there are simple patterns you can implement to make this possible.

So, the first pattern is the polling pattern. My wife didn’t come with me on this trip, and it was just Mother’s Day in the United States on Sunday, so I feel bad saying this, but I think of this pattern as the nag pattern. There are few things in life that I hate more than weeding. I hate to weed, I really do. And it’s something I have to do. Honestly, one of the only things I hate more than weeding is working with the Eclipse IDE, but whatever. But my wife, she uses the polling method to get me to weed, which is: every single day throughout the week she’ll say, “Hey, have you weeded the garden yet?” And I say, “No.” And then an hour later, “Hey, have you weeded the garden yet?” And I say, “No.” And then an hour later, she says, “Hey, have you weeded the garden?” And this continues and continues until finally one of us breaks down and I end up weeding the garden.

So, you can implement the same pattern with Logic Apps. We call it polling: the engine will come to the endpoint that you’ve set up or configured, and you can honestly do all of this through API Management as well, if your API doesn’t support it natively. The engine says, “Hey, do you have data ready for me?” and your API says no if it doesn’t. And all it needs to pass back are two things. One is the Location header, which says, “Hey, here’s where you should check with me next time to see if it’s ready,” and usually you’ll throw something like an ID in the query string so that you can check the status on that check. The second one, which is actually optional, is a Retry-After header, which is the number of seconds the engine should wait before it nags you again.

The second pattern that’s available is the webhook pattern. Both of these patterns work for triggers and actions, so a trigger could be a polling trigger or a webhook trigger, and you can also have actions that implement the same patterns; both of them cause something to wait for an event. The webhook pattern is nice in that instead of coming up and saying, “Hey, have you done this yet?”, getting a no, and continuing to check every 15 seconds, the engine actually goes to the service and says, “Hey, I need you to do work, and as soon as you have it done, here’s the URL you need to notify me at.” So, it’s a push model: your API can do whatever it needs to do, and whenever it’s done, it can send that push request to the Logic App engine to say, “Hey, I’ve got my data, here’s the data. You can go ahead and continue.”

So, let me check the time here. I want to see how much I can show of this. I’ll quickly show you a Visual Studio project I have. I actually created a custom API this week, pieces of which I was gonna show you, and you can actually use custom APIs to wrap SOAP clients if you need to. So, here’s one where I created a SOAP client to do this. But the patterns I actually want to highlight here are those two, the polling pattern and the webhook pattern. I actually feel bad showing this code. I sat next to our dev manager and he looked at this and he’s like, “Wow, this is the worst thread management I’ve ever seen. If I get a hundred requests on this endpoint, I’m gonna get a hundred threads; a million requests, a million threads.” But it illustrates the point, okay?

So, you’ll see in this polling one, what’s happening is I get the request and I’m gonna start that task. In this case, I’m just starting a thread, but ideally I’d be doing better management of that, and it’s gonna sleep for 18 seconds. Now, remember that number, 18 seconds; it actually comes in handy. And as soon as I start that thread, I send an asynchronous response back with that 202, and I send back a GUID and say to check back in 15 seconds. And now I actually have a second method right here, which is the check status. It gets that GUID and it says, “Hey, is that thread still alive?” If it’s finished, if it’s done, then I just send back whatever data I need to send back. And if it’s not yet done, then I say no, check back in another 15 seconds, okay? So, it’s gonna come, it’s gonna ask me, I send back a thing that says wait for 15 seconds, and it comes and asks me again.
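For readers who can’t see the screen, here is a minimal sketch of that polling (asynchronous) pattern as a Web API controller. It is not Jeff’s actual sample (his code lives in the team’s GitHub repo); the routes, the in-memory job dictionary, and the fire-and-forget task are illustrative only, with the same “worst thread management” caveat he jokes about, and it assumes attribute routing is enabled.

```csharp
using System;
using System.Collections.Concurrent;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;

public class PollingController : ApiController
{
    // In-memory job table, for illustration only.
    private static readonly ConcurrentDictionary<string, Task<string>> Jobs =
        new ConcurrentDictionary<string, Task<string>>();

    [HttpPost, Route("api/polling/start")]
    public HttpResponseMessage Start()
    {
        var id = Guid.NewGuid().ToString();

        // Kick off the long-running work; here it just "works" for 18 seconds.
        Jobs[id] = Task.Run(async () =>
        {
            await Task.Delay(TimeSpan.FromSeconds(18));
            return "{ \"result\": \"work is done\" }";
        });

        // 202 Accepted: tell the engine where to poll (Location) and how long to wait (Retry-After).
        var response = Request.CreateResponse(HttpStatusCode.Accepted);
        response.Headers.Location = new Uri(Url.Link("CheckStatus", new { id }));
        response.Headers.Add("Retry-After", "15");
        return response;
    }

    [HttpGet, Route("api/polling/status/{id}", Name = "CheckStatus")]
    public HttpResponseMessage CheckStatus(string id)
    {
        Task<string> job;
        if (!Jobs.TryGetValue(id, out job))
            return Request.CreateResponse(HttpStatusCode.NotFound);

        if (!job.IsCompleted)
        {
            // Not done yet: another 202 with the same Location and Retry-After headers.
            var retry = Request.CreateResponse(HttpStatusCode.Accepted);
            retry.Headers.Location = new Uri(Url.Link("CheckStatus", new { id }));
            retry.Headers.Add("Retry-After", "15");
            return retry;
        }

        // Done: 200 with the payload, so the Logic App can continue the workflow.
        var done = Request.CreateResponse(HttpStatusCode.OK);
        done.Content = new StringContent(job.Result, Encoding.UTF8, "application/json");
        return done;
    }
}
```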

Now, the webhook model, and by the way, all the samples I’m showing today are posted in our GitHub, which I’ll have a link for later, and we also have documentation for both of these patterns. The webhook model is similar in that I get the request and I immediately start up my work. But the difference here is that after I’m done with the work, I call that callback URL that the Logic App passed me to let it know that I’m done, and then I pass back whatever I need to, which in this case is null. So let me actually show you, I think I did this, yeah. So this is what it looks like when I added both of these methods into a Logic App, and if you notice the execution times, it’s worth calling out.
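Before looking at those execution times, here is the webhook side of the same idea, again as a hedged sketch rather than Jeff’s actual sample: the Logic App’s webhook action sends a subscribe request carrying a callback URL (typically built with listCallbackUrl() in the action definition), the API acknowledges immediately, does its work, and then POSTs to that URL to resume the run. The route and the “callbackUrl” property name are assumptions about how the subscribe body is shaped.

```csharp
using System;
using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using System.Web.Http;
using Newtonsoft.Json.Linq;

public class WebhookController : ApiController
{
    [HttpPost, Route("api/webhook/subscribe")]
    public async Task<HttpResponseMessage> Subscribe()
    {
        // The subscribe body is whatever the webhook action sends; here we assume
        // it contains a "callbackUrl" property built from listCallbackUrl().
        var body = JObject.Parse(await Request.Content.ReadAsStringAsync());
        var callbackUrl = (string)body["callbackUrl"];

        // Fire-and-forget for illustration only (same caveat as the polling sample).
        var unused = Task.Run(async () =>
        {
            await Task.Delay(TimeSpan.FromSeconds(18)); // the "long-running work"

            using (var client = new HttpClient())
            {
                // Push the result to the callback URL; the Logic App resumes as soon as this arrives.
                await client.PostAsync(
                    callbackUrl,
                    new StringContent("{ \"result\": \"work is done\" }", Encoding.UTF8, "application/json"));
            }
        });

        // Acknowledge the subscription right away; the workflow now waits for the callback.
        return Request.CreateResponse(HttpStatusCode.OK);
    }
}
```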

The polling one, which is the third one down, took 31 seconds to complete, and that’s because, well, first it handed off the work and was told to check back in 15 seconds. At 15 seconds it asked, “Are you done yet?” and got a no. Then it had to wait until 30 seconds to find out that it was done. Whereas with the webhook model, at 19 seconds, as soon as that 18 seconds of work was done, the API was able to push the result and the workflow was able to continue. So, hopefully, that illustrates the difference between those two models. Neither one is right or wrong. Both work.

One requires the engine to go check every 15 seconds; the other one subscribes. From a trigger standpoint, we as a team are working, especially with services that support webhooks, to make it so that our triggers are push-notified, so that as soon as something changes on the other system, we know. A number of services are adopting this model, like GitHub, PayPal, Stripe, Visual Studio Online. One of the Logic Apps I created recently has a request trigger that Visual Studio Online talks to: when a build is complete, it sends a webhook request to the Logic App to let it know. So the instant that change happens on an external system, I’m able to know.

All right. So, let’s go ahead and talk about management, deployment, and release management. Okay. So, Howard mentioned that he used Azure Resource Explorer, and if you’re not familiar, one of the honestly amazing things about Azure is that every single resource in Azure can be defined from a template, which is a JSON object that describes the details, the parameters, and the metadata and config of that resource. The other benefit to this is that you can also create any resource through an API. So, if you make an API call into Azure or use the PowerShell tools, you can say, “Hey, here’s the template of all the resources that I want you to create,” and the Azure Resource Manager engine will go and deploy all those resources for you.

So, let’s just add one more step to this Logic App I was creating at the beginning, because I want a good Logic App to demo the template creation with. I’m actually gonna send a message to a Service Bus topic after I add the customer row, so let’s just do this really quickly. Maybe I have an enterprise service bus scenario where I want to send this to a topic that other Logic Apps or systems can pick up on, to say, “Hey, I see that we had something added.” And let’s add a content type.

Okay. So, let me show you really quickly. I’m gonna switch over here to the Resource Explorer, which is a great tool if you want to dissect and understand what the template, the actual raw guts of a resource, looks like from an API standpoint in Azure. So, I’m gonna come here to the subscription that I just created this Logic App in. What’s nice about creating these Logic Apps in real time is you know there’s no magic in this. It’s also very scary from my standpoint because, since there’s no magic, I just hope nothing breaks.

Okay, so this is an example, let me grow this a little bit, this is an example of what a Logic App looks like, okay? I have some basic properties, like the provisioning state (succeeded), the date I created it, the date I last changed it. I’m just gonna call out some highlights. The first is that a Logic App resource has a definition, and this is the part that you’re used to seeing in code view. This is what tells us how to execute the different steps, so your definition includes your triggers and your definition includes your actions.

Now, one of our goals as a team, honestly, is that you never have to work with this template if you don’t want to. That’s why we invest so heavily in the designer. But it is worth understanding how it’s constructed, so that as you’re dealing with deployments and those steps, you understand how the pieces fit together. Now, the one part that’s worth highlighting, because it actually isn’t surfaced well in the designer right now and we have work to do there, is this part at the bottom where you have parameters for your Logic App. Kevin mentioned this before: in our new model to call APIs, you no longer have to deploy a microservice in your subscription; all you need is this connection resource, which has metadata about the connection.

So, if I’m connecting to a SQL database, that connection will have information like my connection string. If I’m talking to a Service Bus topic, that connection will have information about the topic I’m talking to and a connection string. OAuth connections will have the OAuth tokens, and so on. So, at the bottom of my definition here, you’ll see that I have two connections, a Service Bus connection and a SQL connection, and these are the IDs of those resources in Azure. Now, if you go and look at the Azure Logic Apps blog, I actually have a nice post that talks about this. It’s a fantastic post; it should be all anyone needs.

That post talks about how you can take this raw definition from the Resource Manager and turn it into a template, where you take out some of these IDs and say, “Okay, I actually need to make this a parameter, so that in my different environments I’m gonna use different connections.” And I could probably invite someone on stage who even has experience with this and say, “Okay, here’s the Logic App we just created; I want you to make it a template that you can deploy and manage and check into source control,” and we would probably spend the remainder of this session and most of lunch just doing that.

But I wanted to show you how this is possible today. So, I actually spent some time this last weekend writing a quick PowerShell script, which I’m going to publish to GitHub later today and share with all of you to start using, and we’re working with the Azure Resources team to make sure that you can export templates as part of their preview functionality that’s coming in the next few weeks. But short term, I wanted you to have this now. So, what this is, is a PowerShell script where I have a function called Get-LogicAppTemplate.

You just give it the name of your Logic App, the resource group it’s in, and the subscription ID, and in this case, I’m gonna export it to a JSON file. The definition I’m actually gonna run it against is the one we’re looking at on screen, so you’ll notice here are my connections, and these are all hard-coded IDs. I’m gonna go ahead and run the script and cross my fingers that this works, because this really was a last-minute project. It’s gonna want my authorization to make sure that it can go and grab these resources.

What it’s actually doing as the script runs is analyzing my definition, seeing the connections that I’m using, figuring out what the template for each connection is and what parameters it needs, and generating a template that I can deploy to any environment. Just like that, it’s done, so let’s open it. Oh please, work. If this works, I think it’s gonna need some applause because I’m nervous right now. Let’s see what happens. Okay, I come in here to Desktop, into Integrate, into Build, into Template, let’s see, and I’m getting a little notification here, awesome. Okay, it worked.

Okay. So, here now is the templatized version of this Logic App that I just made on stage. You can see it’s created parameters for everything, like the service plan name, the Logic App name, the Service Bus connection string, the SQL connection string. I still have my definition as is, but you’ll notice a lot of these things have become parameters. So, if I come here, especially at the bottom, to my connections object, now instead of having those hard-coded connections, it’s saying: at deploy time, whatever connection I created as part of the deployment, that’s the connection I want you to reference. So, honestly, that took me what, like three seconds to run that script? Let’s take this further, okay?

Now, I have a template that I can deploy, but that’s still not enough. I need to manage this. Here’s a Visual Studio project I have. I’m actually just gonna do this in real time because I’m trusting and, so far, I’ve had good luck. Worst-case scenario, I’ll just have to use an old commit. I’m gonna go ahead and upload this template file that I’ve just generated. Once we have Visual Studio integration, this will be even easier because you could just check it directly in. But part of Visual Studio Online is that they have release management. So if I come over here to Release, I can quickly create a full release pipeline for my Logic App using an Azure resource template.

So, in this case, you’ll notice I have the Logic App release management pipeline that I created this week. And if I edit this… I hope you can see this okay. I’ll zoom in; some of these elements don’t show up as nicely as they should. As this loads: what this allows me to do is configure different environments for my release, so I can have a dev environment, a production environment, a test environment, or whatever I need, and you’ll notice the three that I configured were Dev, test, and prod. All I’m doing in the config for this is saying, “Hey, I want to use this Azure subscription,” which you can configure through Visual Studio, and, “I want to create this resource group.” I can choose the resource group I want to deploy into in that subscription; in this case, it’s my Dev resource group.

And now I can tell it the template to use. In this case, I’m gonna reference the template that I just checked into source control; again, as part of your build process, you could be generating this template. You can also pass it a parameters file, or you can define your parameters inline, so I actually set my connection strings in a parameters file beforehand. But you’ll notice I can even say the service plan name I want to be the Dev service plan, and the Logic App name I want to be the Dev Logic App. And I’ll just quickly show you: if I come back to this template, you’ll notice here at the top the parameters it’s gonna be setting, things like the Logic App name, the service plan name, and so on.

So, what makes this great is that now I have this deployment process, and I can set things like, “Hey, at each step, I need someone to approve the deployment.” In this case, I’ve set myself as the approver, because I should be the approver for all deployments. Now let’s go ahead and add a release and call it Live Demo. Please work, because I just generated this template. I can choose the commit that I want to build this with. And I’m gonna deploy it to Dev, wait for a successful deployment, then deploy it to test, and then deploy it to prod, and I have three resource groups set up.

So, I’m gonna go ahead and hit create, and that’s gonna start a new release; it just said release seven. You’ll see now that it’s queued up this resource deployment. It’s gonna start it because it’s the first thing on the queue, so very shortly you’ll see this switch to in progress. It’s gonna take that template that we created and insert the parameters that I set, so I set that SQL connection string, I set that Service Bus topic information, and I set the Logic App names. And it’s gonna deploy that into the resource group that I configured, which is my Dev resource group.

So, as soon as this is done, I’ll be able to open that resource group, open the Logic App, and begin testing it end to end. And Visual Studio release management has nice things like test steps, so if I wanted to have automated tests, I could build on top of those as well. Unfortunately, I did not have the time to do that so far. But very quickly, this deployment is gonna finish. We’ll be able to open that resource group in Azure, and once we approve this, it’s gonna send me an e-mail. Once I click approve, it can start the next release step of going into test and into production.

So, what we really want to do is make sure that we’re putting the tools in your hands so that you can go from making these cool Logic Apps to creating templates for them and having those templates checked into source control, using Visual Studio so you’re not doing any copy-pasting like I had to do in this demo, but everything is in the one place you need, and you can use services like Visual Studio Online for release management as needed.

So, let’s go ahead and check on these logs. It’s starting the release, it’s doing the Azure deployment, and it seems to just be sitting in that step right now. That’s okay. Let’s see. I’m confident it’s gonna finish soon; usually this Azure piece doesn’t take too long. So, while that’s deploying, let’s see if I have anything else I wanted to cover. I mentioned that a simple deployment template is parameters and resources, and for Logic Apps, I kind of already showed this. The four things you need for a Logic App deployment are the resource group you’re deploying to, the service plan it maps to, your Logic App, which contains your definition, and then any connections it’s gonna be referencing.

Now, I could, at the same time, take this same template that was generated, and if I have other custom Azure resources that it depends on, I could inject those directly into that template and deploy everything in one massive deployment, with any custom APIs I need as well. So, let’s come back here and see how this went. It succeeded, and now the moment of truth. Let’s come back into Azure, open our Dev resource group, and we should have a Logic App here, our Dev Logic App. And this should be the exact Logic App that you just watched me create on stage. So, let’s open it. It says one trigger, two actions; let’s see what this looks like.

It is now loading and keeping up the suspense so that it gets a bigger round of applause. There we go. So, there’s that Logic App, the exact one I just made on stage, now deployed into my development environment. I could go and approve this deployment after I test it, and it would continue through that release management process. I did this all on stage in front of 300 people, so I think that’s pretty cool stuff. No magic, no magic. Awesome.

So, hopefully, that’s exciting too. Again, as I mentioned, the PowerShell script that I wrote for this I’m gonna publish on GitHub. I’m not promising it’s perfect, but it’s there for now, and if you’re familiar, the Azure Resource Manager team is working on the ability for you to export entire resource groups as a template, and we’re working with them to make sure they understand how Logic Apps work, so that they can create a template similar to the one we just generated here.

So, again, reach out, reach in. The one link I added on this slide that wasn’t on the other ones is the link to our GitHub. It has a number of different samples, all the samples I showed today on how to do custom triggers and custom actions, including long-running polling actions, and later today, once I check it into GitHub, the PowerShell script to generate templates. Please follow our blog, as we post updates there. And again, we have a Logic Apps Live webcast that happens every month. Unfortunately, it’s been about five weeks right now; I’m going through a little bit of YouTube withdrawal because we’ve had some conferences. But we hope you can join us for that webcast, where we do monthly updates.

The last piece that I’ll mention before… did I already go over on lunch? No. Wow. I didn’t see enough people checking their watches, so I was like, “People are gonna be ticked.” Okay, so the last piece I’ll plug, since I do have a little bit of time, is… what was I gonna plug? Oh yes, one of the pieces that we’re getting much more disciplined on, and again, to the story of making sure you can trust Logic Apps, this is a piece that I neglected to mention before. One of the things that’s great about Logic Apps is that we’re continually coming out with new features and new functionality. But this also comes from the other lens, where people are like, “Hold on, I don’t want to have to worry about my stuff breaking, or things stopping working, because you’re continually releasing and adding new functionality.”

So, there are a few things I just want to speak to on that note in the last five minutes. The first is that we take schema versioning and API versioning very seriously, right? So, when we release new functionality around things like control flow, and we enable you to do things like having scopes, we’re actually gonna increment the schema version. Which means that if you have a Logic App today, like the one Howard talked through, that’s using the old schema version, we’re gonna continue to execute it in the exact same way it was designed to execute.

And if you want to start using the new schema and the new capabilities and the new pieces there, you can start doing that. But we’re not gonna ship a new feature and then force an upgrade on people and say, “Hey, by the way, I hope you’re up tonight, because we’re doing an update and you need to know.” The same goes for our connectors with API versioning. We’re making sure that if we need to make a change to any of our APIs, you don’t have to worry about your functionality breaking and stopping working. That’s something we’re very disciplined on, and we’re setting some strict processes around it. I’ll grab you in just a second.

The other piece to it as well is product updates, and this is one where I’m publicly humiliating myself, because we haven’t done as well at this recently. But if you go to updates.azure.com, today you can look at App Service updates; I actually just submitted a new update for some of the new functionality. Anytime we do a release, we’re gonna be doing an App Service update so that you understand the major bugs that were fixed and pieces like that. Yeah, Jim’s clapping, my manager, because he’s like, “Okay, come on.” But I want to make sure that all of you have… come on Jim, keep it for the one-on-ones.

But I want to make sure that you all have the tools and the confidence to know that you can create integrations and put them into a state that’s production ready, that you don’t have to worry about things breaking, and that if there are new features or functionality or bug fixes that might affect you, you know where to go to find that information. So, those are big pieces that we’re focusing on now. And you can go ahead: do you have a question or comment?

Man: Yeah. Will you have a deprecation plan for those schema versions?

Jeff: That’s a good question. So, the question was, will there be a deprecation plan for schema versions? I should actually have Charles answer this; he’d probably be the best person. I’m just gonna tell Charles, if I say something wrong here, to hit me upside the head. While we’re in preview right now, we’re gonna do this new schema version, and we’re gonna continue to honor the existing schemas as long as people need, especially with the volume of people that we have right now and the ease of upgrading schemas.

Because I know, if you were part of the last schema upgrade, it was more dramatic. We’re not gonna have to do that again. We’re not changing the designer. We’re not changing the connection infrastructure. It’s really, “Hey, I want these new features, I want to opt in.” Eventually, for this next schema, I would guess in like a year, if we still see people executing on the old schema, we’ll start reaching out and saying, “Hey, what can we do to get you to start using the new one?”

Once we’re in general availability, which we aren’t today, I don’t actually know… Charles, how long? Two years. So, we’ll honor a schema version for two years once we’re in general availability, so you have the confidence of knowing that at least for two years, we’re gonna do that. And again, like I mentioned, we love telemetry, we love all those logs, we love reaching out to customers. A few of you might have even gotten emails from me or someone on the team saying, “Hey, we notice you’re using Logic Apps. Can we help?” So, if we notice that the two years is coming up and there are a few systems still using the old schema, we’re not just gonna shut it off and say, “Hey, sorry,” but we’ll make sure we have a transition plan in place as well. So, great question.

Awesome. Okay, with that, thank you so much. Thank you so much for coming. The team’s around all day today and the rest of the week, happy to answer questions. So, thank you all.
