Integrate 2018

June 4-6, London

Summary

Clemens Vasters' presentation focuses on higher-level architecture in an enterprise ecosystem. He defines two types of data exchange, Messaging and Eventing, then dives into Eventing in more detail.

Eventing, Serverless and the Extensible Enterprise

Integrate 2018, June 4-6, etc.venues, London

Length: 34:40

Video Transcript

Hello, everyone. Hi. I'm Rochelle Saldanha and I work in the Client Relations and Technical Support team. And most recently, I joined the Document360 team. I'm here to introduce Clemens Vasters, who works at Microsoft as a Product Architect on the Azure product engineering team. And he's here today to talk on Eventing, Serverless and the Extensible Enterprise. As you all know, he's quite an authority on that subject. And so without further ado, let's welcome Clemens.

Clemens: There are some seats here in the front in case anybody is stuck in the back; there are seats, I see, here. And you can come to the front, I will not bite, in spite of me looking like I had already bitten a lot of people. So, my name's Clemens. I'm an architect on the messaging team. We do a lot of things. I've been here for a long time. I wrote this book. And since then I've been doing kind of the Microsoft messaging stuff. Someone said I need to open with a picture of an airplane, so I will be happy to comply. I took that picture in Pittsburgh, two years ago. F-22 Raptor. All right. So after doing all these things in the beginning, we have a lot of services that have to do with messaging, and you heard about some of them. I actually don't have API Management on here, which I probably should. And the team that I represent, that Dan represents, whom you will hear from right after this talk, we run these services. So we run Service Bus; we don't run Azure Queues, but it's kind of the same bucket, that's why I have it here; Event Hubs, Event Grid and the Relay. What I wanna talk about today is event-driven applications and how serverless plays into those sorts of applications, so it's going to be an architectural talk. I wanna talk about events, event-driven serverless, and how all those things work. And to set the stage for that, I have brought a little scenario for you.

It has to do with pictures too, so the photo that I just showed you is on topic, also with aircraft. So, let's imagine a scenario. A photographer is out there shooting press photos. So you have a camera, and the camera or the phone uploads a raw picture or JPG picture into a blob container that holds uploads. That now triggers an event through Event Grid for the blobCreated event, to which we hook up an Azure Function. And the Azure Function does quite a few things in the beginning: it goes and classifies the picture, it runs it by Azure Cognitive Services to do vision classification. And then it goes and does some extra formatting, it does some tagging, and then goes and moves that picture into a picture inventory. And also puts it into an index database. And then it pushes to a photo-ingested notification hook so that we can go and process that. As we've done the ingestion into the inventory, we're going to do a further step of triggering, again with the blobCreated hook, on the inventory to go and size those images, which means we're going to create thumbnails, which are appropriate for previews, which are appropriate for various forms of publishing. So we're going to go and automate that and put that into a blob container sizedimages. All of that flow, so far, is simply one that is completely event driven and completely based on functions, without you having to write any hosting code.
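
To make that flow concrete, here is a minimal sketch of what the ingestion function might look like as a JavaScript Azure Function bound to an Event Grid trigger (declared via an eventGridTrigger binding in its function.json). The helpers classifyImage, moveToInventory, indexPicture and notifyPhotoIngested are hypothetical placeholders for the classification, inventory, indexing and notification steps described in the talk.

```js
// Sketch of an Azure Function (JavaScript) bound to an Event Grid trigger.
// classifyImage, moveToInventory, indexPicture and notifyPhotoIngested are
// hypothetical helpers standing in for the steps described above.
module.exports = async function (context, eventGridEvent) {
    // Only react to new blobs landing in the uploads container.
    if (eventGridEvent.eventType !== 'Microsoft.Storage.BlobCreated') {
        return;
    }
    const blobUrl = eventGridEvent.data.url;
    context.log(`New upload: ${blobUrl}`);

    const tags = await classifyImage(blobUrl);    // e.g. call the vision API
    await moveToInventory(blobUrl, tags);         // copy into the inventory container
    await indexPicture(blobUrl, tags);            // write to the index database
    await notifyPhotoIngested(blobUrl, tags);     // push the "photo ingested" notification
};
```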

Then, as we're having the photo ingested, there may be notifications being sent to, let's say, that phone. And on the phone you're being asked, "Hey, we just ingested this picture, because you are the photographer who just took that photo with their camera." You take the photo with your proper camera. You get a notification on your phone saying, "Hey, do you want to go and add metadata to that?" Or the photo gets into Lightroom, goes into Photoshop, goes into the Newsroom application, and then in the Newsroom application you decide that that photo is relevant and you go and publish an article on your website that's using that photo. And that photo is obviously then already using some of the pre-sized images. Again, that flow, so far, is all push notifications. You notice that there are some flows which actually cross into client territory, right? They're reaching into a Newsroom, they're reaching onto a phone. So we'll have to go and have some path for that. Now, the photo editor decides to make an update, a touch-up, to one of those photos, or you decide to do that from your phone as you're out there. So you do an update. That update then causes an update on the blob container, causes some re-indexing, causes some updates to the sized images, which will then go and automatically update the published assets and then notify the Newsroom app again. That's a pretty powerful flow you can go and realize, purely driven by events. And it's just one example.

Let's look at something else: sensor-driven events and building management. Well, we have two pivots on the information that flows, and, just in the interest of time, because I only have 30 minutes, we're only going to look at one of them. And that also shows that you should do the architecture and organization of your events in the beginning, as a topology, so that you can answer interesting questions. So we're going to go and talk about sensors in the building. We will organize them on a building-management pivot. You might also go and organize them, in addition to that, on a device-management pivot. But those devices are now going to emit events into the building management system, and they might emit other events, in a different organization, into the device management system. So these are sensors in that building. Those sensors are in rooms, covering gas and biohazard sensors, climate sensors, fire and smoke sensors, and occupancy sensors. Those sensors are organized in rooms. The rooms are organized in units. We are in a unit, in such a building. Units exist on floors. Floors exist in buildings. Buildings exist on campuses. Campuses are owned by some organization. You can now take that taxonomy and turn it into an organizational hierarchy for how you publish your events. If that looks like a topic for you, that's a fine idea.
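
As a hedged illustration of that taxonomy, one way to encode it is as a hierarchical event subject, so subscribers can filter by prefix (a whole campus, one building, one floor) or by the trailing sensor type. The field names and path scheme here are assumptions for illustration, not anything the talk prescribes.

```js
// Illustrative only: encoding the building-management taxonomy as a
// hierarchical event subject. Subscribers can then filter by prefix.
function sensorSubject(reading) {
    const { campus, building, floor, unit, room, sensorType } = reading;
    return `/campuses/${campus}/buildings/${building}/floors/${floor}` +
           `/units/${unit}/rooms/${room}/${sensorType}`;
}

// e.g. "/campuses/redmond/buildings/41/floors/2/units/7/rooms/2071/occupancy"
// An Event Grid subscription could use a subjectBeginsWith filter of
// "/campuses/redmond/buildings/41" to see everything for building 41.
console.log(sensorSubject({
    campus: 'redmond', building: '41', floor: '2',
    unit: '7', room: '2071', sensorType: 'occupancy'
}));
```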

What's interesting is that, if you go and organize your events a priori, or the event taxonomy, in that way, then as you are aggregating the stream, as you're asking questions of that event stream, you can start asking interesting questions. For instance: are there people in this room? What room is unoccupied in building 41? Is there a fire alarm at the site? And then you can go and react on those things. You have an analysis, and based on that analysis you can go and trigger some actions. Some of those actions may be immediate. So the operation is immediately actionable, like an alarm is immediately actionable. If there's a fire sensor and the fire sensor triggers, you probably wanna go and take some action on that. Other things, like the fluctuation of temperature in the room, well, you will take action after a while, if you see the temperature going up in a trend. So for the first question, are there people in that room? You may wanna go and subscribe to the occupancy sensor of a particular room and then basically take the last value read of that, and then decide based on this whether there are people in the room. So that's fairly simple. What room is unoccupied in building 41? Well, you observe all those streams of the occupancy sensors and then you query across the latest values. Is there a fire alarm at the site? Well, you go and subscribe to the entire site and then you filter on whether you see an alarm.
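
A minimal sketch of that "latest value" pattern: keep the most recent reading per sensor subject, then answer the three questions by lookup, scan, and filter. A real system would use a stream processor for this; the in-memory map and the event shape (subject, data.occupied, data.alarm) are assumptions that just illustrate the shape of the queries.

```js
// Keep the most recent event per sensor subject.
const latest = new Map(); // subject -> most recent event

function onEvent(evt) {
    latest.set(evt.subject, evt);
}

// 1. Are there people in this room? Read the last occupancy value.
const isOccupied = (roomSubject) =>
    latest.get(`${roomSubject}/occupancy`)?.data.occupied === true;

// 2. Which rooms are unoccupied in building 41? Query across the latest values.
const unoccupiedRooms = (buildingPrefix) =>
    [...latest.values()]
        .filter(e => e.subject.startsWith(buildingPrefix) &&
                     e.subject.endsWith('/occupancy') &&
                     e.data.occupied === false)
        .map(e => e.subject);

// 3. Is there a fire alarm at the site? Subscribe to everything, filter for alarms.
const fireAlarmAtSite = (sitePrefix) =>
    [...latest.values()].some(e => e.subject.startsWith(sitePrefix) &&
                                   e.subject.endsWith('/fire') &&
                                   e.data.alarm === true);
```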

So all of those things can be driven based on events, and you react to those events either immediately, for something that's immediately actionable, or to an aggregate of them. Just so that you get an idea; keep that in your head, we'll get back to that. Service orientation is something that we've been talking about for 20 years now. And now all of these serverless things come around and everybody's talking about functions, and how functions are the new craze; how does that relate to the principles of service orientation that we've been talking about for so long? Services, and this is what we've been preaching for 20 years now, are autonomous entities. The service owns all its state. It owns a communication contract. It can be changed, redeployed, and completely replaced. It has a well-known set of outside communication paths. It has no shared state with others. Two services that share the same underlying read-write storage are one service, not two. So all of those principles have been true for 20 years, and actually, something like Azure would not exist if we didn't have these autonomy principles, because they make it possible for us to build, deploy, and run services completely independent of each other. It's not that there's a grand coordination summit of teams every week that says, "You go and deploy this, and you and you deploy that"; we have a set of rules that we all follow, and that makes it possible for us to independently operate and run our services while they have dependencies on each other.

So all of that is grounded in these foundational rules: rules of autonomy, but also commitment to others. So that's the modern notion of a service. We used to have a notion of service that was probably a little narrower, when we were thinking about this in terms of the artifacts that we were dealing with. You would think about the service as, dare I say the word, a WCF service. It would be hosted together and had a set of exposed functions. But really, the evolved notion of a service is that it's a group of things that a team owns, that allows a team to go and evolve their capabilities together. So service is really about ownership. And that might combine a set of different artifacts. And if you take that together with the notion of serverless, and specifically with functions, you get to the point where you have a service that is composed of a set of functions, and those functions form a fleet. And when you think about functions, they are independently deployable, independently scalable, independently manageable. You may have a function or two which take thousands of requests per second, and then you have a bunch of other functions, which are auxiliary, which take almost no traffic. And scaling them together is hard, but scaling them independently, appropriate for what each particular thing does, is better, and ultimately also likely cheaper and easier to manage.

And also, not everything is the same transport, right? One of the arguable mistakes of a model like WCF, where it's all bound to the same transport, even though we have these flexibilities in bindings and all those things, is that not everything is suited to talking over HTTP at all, or to having a synchronous request-response interface. So, with the notion of a function, you can go and have a function that just binds to a queue, you can have a function that just binds to HTTP, and you can have a function that just binds to the output of Kafka, and run them all together. And then call that a service. So a service is really not about the particular artifact; it's really an organizational principle. And that question was asked to me in Athens, on Saturday: "How do I think about this from an organizational perspective? Doesn't that get complicated?" And the way I think about this is: this is the stuff you would put in one repo. But it's not necessarily the stuff that you all deploy into the same slot. That's how you organize your stuff. The upside of functions is that they're independently deployable and versionable, and independently scalable, and they can have independent communication paths, as I just said. They can be hooked up to a database transaction, they can be a trigger. And that's fine. A system is made up of services. And they are composed, and the solution scope for the system, and the selection of services that are applied in the scope of a system, may be informed by law, culture, and policy.
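
To illustrate the "one service, many transports" point above: in Azure Functions, each function declares its own trigger in a function.json file, so one function in the service can be fed purely by a Service Bus queue while a sibling binds to HTTP or to an event stream. A minimal sketch of the queue-bound case, where the queue name and connection setting name are placeholders:

```json
{
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "direction": "in",
      "name": "message",
      "queueName": "orders",
      "connection": "ServiceBusConnection"
    }
  ]
}
```

A sibling function in the same repo would carry an httpTrigger binding instead, and each one deploys and scales on its own.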

A U.S. system these days may look a little bit different from a European system, because of GDPR and because of many other different concerns. So the principles of modularization that we had 30, 40 years ago also still apply, in that you are grouping your services, you're making your system modular, based on those overarching non-technical concerns. With that all out of the way, let's get to messaging. REST is great for… or HTTP or RPC are great for obtaining immediate answers. That's why it's so popular, and it's also how you do a function call, so it's super popular. But there are very many scenarios where that's not the appropriate communication path, because your communication is not a request but something else. And I have a few something-elses here that are not necessarily requests: you have reports, transfers, commands, notifications, queries, measurements, assignments, handovers. A handover is: you have done a piece of work in your part of the system and now you're effectively bundling up the result and you're handing it off to a completely different part of the system. You're done with that work. You do a transfer, which is: you have a large file and you try to get that over to some other part. You have a job, where you are assigning some duty to another system. You throw that over the fence and then eventually, when it's done, you want to have a report about it. And you can go and take these and group them.

So that grouping here is based on what expectation the sender has when they send these things. Commands, transfers, queries, handovers, jobs, assignments, updates, requests: all of them put some sort of expectation about the next steps on the receiver, by the sender. Reports, notifications, measurements and traces are basically just reporting out facts, without having any particular expectations of what's going to happen with them. So that's why I distinguish between intents and facts. Intents are very well covered by Messaging, and what they do is set expectations. You are having conversations, you have mutual assurances about contracts and how the communication happens, you do control transfers of a workflow to someone else. Or, in the case of banking transactions, you actually do value transfers. And in the case of, dare I say it, Bitcoin, you literally transfer the value in the record itself, because the value is inherent in that record, which I find crazy, but you might not. So these are all things that happen with intents, that happen in messaging. Eventing is all about reporting out facts. You're effectively making a statement about something that just happened, and the entire scenario that I showed you in the beginning, the photos scenario, was all about, "Hey, I did something. I stored a file." And then the next step picked that up and said, "Hey, yeah, I know what to do with a file," and did its own thing.

But there was never a control point, a control handover, where it said, "You've got to go and do this." These were all extensions, effectively, that were done on the fact that you had just done some work. So messaging is about conversations. Eventing is about telling people what just happened. That has consequences in architecture. And it has consequences in architecture so grave that we are making different services for those. Example: you have a device command. You have a device that sits somewhere out in the world. Let's say it's a car. The car drives through tunnels, gets parked somewhere in parking garages, it drives through a rural landscape; it's not always in reach. So you have some queue for it. And if you want to send something to the car, you give that to the queue, and then when the car shows up again on the network, it goes and fetches that stuff from the queue. So if a backend process wants to have a conversation with the car, it puts a message into the queue, and then the car puts its response onto a queue that comes back down and gets it to the backend process. And there's some correlation happening, because it's inherently asynchronous. In the ideal case it's very fast; in the case where the car drives through a tunnel, no, it's only as fast as the connection being reestablished allows. But there are little queues in the middle. IoT Hub and Service Bus cover that scenario. It's a command that's being issued to a device. And that command is happening over a messaging path.
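
A small sketch of what issuing such a command might look like with the @azure/service-bus SDK (v7 API). The per-device queue naming scheme and the replyTo queue name are assumptions for illustration; the correlationId/replyTo pattern is the standard way to match the eventual asynchronous reply back to the request.

```js
// Hedged sketch: sending a command to a device over a Service Bus queue.
const { ServiceBusClient } = require('@azure/service-bus');

async function sendCommandToCar(connectionString, carId, command) {
    const client = new ServiceBusClient(connectionString);
    const sender = client.createSender(`commands-${carId}`); // per-device queue (assumed scheme)

    await sender.sendMessages({
        body: command,                     // e.g. { requestId: '42', action: 'unlockDoors' }
        correlationId: command.requestId,  // lets the backend match the reply later
        replyTo: 'backend-responses'       // queue the car answers on (placeholder)
    });

    await sender.close();
    await client.close();
}
```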

Downstream, if you think about the messaging for the backend process, the backend process may employ a workflow manager, like Logic Apps, and then go and execute a number of steps, and then go and hand off the result to accounting and such. Which means that's the integration thing; that's why you are all here, for integrating the accounting and such, I think. So that's messaging. Messaging is really about heavy-duty transfer, about conversations, about integration of things. Eventing is about reporting stuff out. And there are, again, two different categories here: discrete events and series events. Discrete events are independent, they report some state change, and they are immediately actionable. "I just stored this picture in the blob" is something that is immediately actionable. A temperature reading here in this room is not immediately actionable, because even if it spikes above 23 degrees or 24 or 25 degrees, it might only do so for a very short moment of time, and you do not wanna go and have the air conditioning trigger on those short little-windows-of-time events; you want to go and measure over a minute and then, based on that, go and make a decision, so you don't switch it off and on the whole time. So those are series events, where you are looking at multiple events, you aggregate them up, and then you take action based on the analytics result; they are analyzable rather than immediately actionable.
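
A minimal sketch of that aggregate-then-act idea: average the temperature over a one-minute tumbling window and only act on the trend, never on an individual spike. The window length and the 24-degree threshold are illustrative assumptions.

```js
// Aggregate series events over a tumbling window before taking action.
function makeWindowedAverager(windowMs, onWindowClosed) {
    let readings = [];
    let windowStart = Date.now();

    return function onReading(value, now = Date.now()) {
        if (now - windowStart >= windowMs) {
            const avg = readings.reduce((a, b) => a + b, 0) / readings.length;
            onWindowClosed(avg);   // act on the aggregate, not the single reading
            readings = [];
            windowStart = now;
        }
        readings.push(value);
    };
}

const onTemperature = makeWindowedAverager(60 * 1000, avg => {
    if (avg > 24) {
        console.log(`Average ${avg.toFixed(1)} degrees over the last minute: cool the room`);
    }
});

// Each sensor reading feeds the window; nothing happens on a momentary spike.
onTemperature(22.5);
```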

The discrete events are interesting because they are an extensibility enabler. They report independent, actionable state changes: blob created, sales lead created. I had a conversation recently with a friend of mine, who used to be a manager of mine four companies ago, 12 years ago, and who is now starting a company. They also do workflow. And they kind of do a thing that easily morphs into a CRM system. And so he has that sales-lead-created thing; that's exactly one of his key workflows. And he has an extensibility system that triggers based on that, where he can go and do some plug-ins, so they're effectively doing exactly that. They have events that they raise, and then they allow extensibility upon those events. The great thing is that you don't compromise your security model with that. Because you raise events about something, you give context in that event, but you don't give all the data with that event. If you have privileged information, like, for instance, that blob, right? Not everybody can go and see that picture. But you can report the fact that you uploaded a picture. And so you're making that fairly broadly available to a broader number of clients. But as they come around with that URL, they have to go and pass the authorization gate. So you can go and do fine-grained authorization through this: you know, go here, notify, and then come back and get the details. You can do fine-grained authorization even though you're kind of broadcasting these things out more broadly.
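
A sketch of that "broadcast the fact, gate the data" pattern, assuming Node's global fetch and a hypothetical getAccessTokenFor helper standing in for whatever credential flow (Azure AD token, SAS, etc.) a real subscriber would use: the event carries only a URL and context, and the blob itself is only released to callers who pass the authorization gate.

```js
// Hedged sketch: the event fans out broadly; the data stays gated.
async function onBlobCreatedEvent(evt) {
    const blobUrl = evt.data.url;  // the fact: "a picture was uploaded here"

    // The event itself exposed nothing privileged. Fetching the blob does,
    // so the subscriber must authenticate per caller.
    const token = await getAccessTokenFor(blobUrl); // hypothetical credential helper
    const response = await fetch(blobUrl, {
        headers: { Authorization: `Bearer ${token}` }
    });
    if (response.status === 403) {
        // This subscriber saw the fact but is not allowed to see the picture.
        return;
    }
    const picture = await response.arrayBuffer();
    // ...process the picture...
}
```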

Data series processing happens by collecting the readings and putting them into an ingestor. Typically you go and partition them up. That's the thing that we do with Event Hubs. And then you start pulling those events towards you. And the reason why this pattern exists, the reason why you have Kafka, why you have Event Hubs, is because the processor is stateful. You pull the stuff towards you because you need to go and aggregate, and you need to aggregate into a bucket. That's why that pattern exists. You put the data in, and now you have a way to control your own cursor. You can go and step back. You can step forward. You can basically use this like a tape drive; that's the matching analogy. And you can pull the data towards you. So for every kind of stateful processing of events, Event Hubs is the right choice. A thing that we did, and that Dan is going to talk more about, is we announced at Build Event Hubs for Kafka ecosystems, where we are supporting the Kafka protocol on Event Hubs. No Kafka bits were harmed in the process. We have read the Kafka protocol specification and made a brand-new implementation of it in Event Hubs, so you are getting the reliability of Event Hubs and the scale of Event Hubs and the effortless use of Event Hubs, and you don't need to know what ZooKeeper is, while using original Kafka client bits, which is pretty amazing.
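
A sketch of what pointing an unmodified Kafka client at an Event Hubs namespace might look like, here using the kafkajs client and the documented pattern for the Event Hubs Kafka endpoint (SASL PLAIN on port 9093 with "$ConnectionString" as the username). The namespace, key, and event hub name are placeholders.

```js
// Hedged sketch: an off-the-shelf Kafka client talking to Event Hubs.
const { Kafka } = require('kafkajs');

const kafka = new Kafka({
    clientId: 'photo-ingest',
    brokers: ['mynamespace.servicebus.windows.net:9093'], // placeholder namespace
    ssl: true,
    sasl: {
        mechanism: 'plain',
        username: '$ConnectionString',
        password: 'Endpoint=sb://mynamespace.servicebus.windows.net/;SharedAccessKeyName=send;SharedAccessKey=<key>'
    }
});

async function produce() {
    const producer = kafka.producer();
    await producer.connect();
    // This lands in an Event Hub, and could just as well be read back over AMQP.
    await producer.send({
        topic: 'telemetry', // the event hub name doubles as the Kafka topic
        messages: [{ value: JSON.stringify({ room: '2071', tempC: 22.5 }) }]
    });
    await producer.disconnect();
}
```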

So you can now, in addition to the existing protocols, also use Kafka. And the great thing is, you can push with Kafka and you can pull out of Event Hubs with AMQP. You can push with AMQP and you can pull out with Kafka. So it's a multi-protocol service. Another thing we also have is a path to Event Grid, where we notify you about data that we have put into blob store with Capture. And then you see CNCF CloudEvents here; I'll get to that in a second. Discrete events are independent, which means you don't need to pull them to the same place. You can actually go and spread them out to independent processors, which means here you don't need to have a pull model; you can actually have a push model. So what we're doing, with a distributor pattern, which we call Azure Event Grid, is we're pushing into the Event Grid. That's a pub-sub engine by itself, and then we can go and push out to those handlers, and that's the thing that we put into the Azure platform; Dan is going to talk about that in more detail. What I'll talk about briefly, because I'm involved in that standardization process directly, is CNCF CloudEvents, and that's something that we're doing as standardization work with Google and with IBM and with Serverless Inc. and with VMware and many others. For the kinds of events we're pushing with Event Grid, we're now creating a multi-party standard: common metadata for events.
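
For a sense of what that common metadata looks like, here is an illustrative envelope in the CloudEvents 0.1 draft format that was current at the time of the talk; the values are made up for the photo scenario, and later spec versions renamed several of these attributes.

```json
{
  "cloudEventsVersion": "0.1",
  "eventType": "Microsoft.Storage.BlobCreated",
  "source": "/subscriptions/<subscription-id>/storageAccounts/photostore",
  "eventID": "9aeb0fdf-c01e-0131-0922-9eb54906e209",
  "eventTime": "2018-06-04T10:30:00Z",
  "contentType": "application/json",
  "data": {
    "url": "https://photostore.blob.core.windows.net/uploads/raptor.jpg"
  }
}
```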

We're also thinking about integrating that into the flow for all the stuff that goes through Event Hubs, even though we will not prescribe it. It keeps the flexibility to innovate on event semantics: we're not going to bolt down what an event means, but we're giving it a metadata harness. And that will also support multiple transport options. HTTP support, in the form of Webhooks, and MQTT are checked in. AMQP support will be checked in this week, actually. And the HTTP support is at 0.1. We released the 0.1 support in the product at KubeCon EU, in Denmark, four or five weeks ago. So you can use CNCF CloudEvents today as a format. And it's something that we will go and push further. And the great thing is that there are so many vendors who have committed support for it that we believe this will be the way events, certainly the discrete events, are going to be expressed across very many cloud platforms. We do that to make life easier for you, so you don't have to handle events in a different way on different platforms. So, effectively, as a summary, the way we're thinking about this is: event driven, you have core functions. You have your core application. And your core application is starting to write out events about things that it does. It creates a new sales record. And now you can go and start building applications that are reactive to it.

And as we think about digitalization, as we think about digital functions marching more and more into the core of the business, that's really the work that's being done: reacting to things that are happening in your world, and even extensions can go and send more events. And digitalization, and I think event driven, is a pathway towards automating more and more work, but also towards being able to involve knowledge workers very smartly in this, because ultimately you can take any and every one of those events and turn it into an e-mail, and involve people in the process. Then the question is, how do we extend this to the edge? And that's the last thing I wanna show you, and I wanna do one little demo. And that is, I said, "Hey, you want to go and push one of those events to a phone, or you want to go and push one of those events back to the restaurant that's on premises? How are you going to do this?" And we have a new little trick. And for some of you, the veterans, it's going to be an old little trick. I have… Let me go and stop this. I have a little app. I will show you that app in a moment. So here's what this is. I'm starting a listener.js app in node. That's really unexciting, isn't it? Okay. And… oh, God. Like this. And then I'm going to go and call that from here. If the demo gods align, which they usually don't. People are doing a lot of things on the network, I guess. Let me try that again. And then let me try that again with a different browser. And then let me give up, just because of time.

Oh, that's why. It was working. It was just not printing. Okay. So, yeah, that was pretty dumb. Okay. So, this is the oldest sample that we're showing, actually, ever, because this is based on the Relay. And the Relay had its 12th birthday last Thursday. And we taught that Relay a new trick. What I just showed you is the right side of this picture. The left side is a regular old node.js listener. And if you now go and, instead of requiring the https module of node, require the hyco-https module, you get the exact same API for HTTP listeners, but it runs through the Azure Relay. What that allows you to do, and Dan is going to show another trick of this sort, I think, is to host a relayed HTTP listener anywhere. So you can go and run that thing on premises, in a gym, in a tax adviser's office, in a law firm, in a bank branch, and you can expose services right in the app using that technique. You don't need to have a VPN. And it's not as dangerous as a VPN, because with a VPN you actually expose Layer 2, and here you only deal with Layer 7. We're giving you that support. I'll just show you that, because of time, but we also published an ASP.NET Core extension for hosting, so you can take an entire ASP.NET Core web service and say "use as a relay," and then you will go and host that thing on the Azure Relay.
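
For reference, a sketch of that relayed listener, following the published azure-relay-node sample for the hyco-https module; the namespace, hybrid connection path, and key are placeholders for your own relay entity.

```js
// Sketch of a relayed HTTP listener: same handler shape as node's https,
// but reachable through the Azure Relay with no inbound firewall holes.
const https = require('hyco-https'); // drop-in replacement for node's https module

const ns = 'mynamespace.servicebus.windows.net'; // placeholder namespace
const path = 'myhybridconnection';               // placeholder hybrid connection
const keyrule = 'listen';                        // placeholder SAS rule name
const key = '<shared-access-key>';

const uri = https.createRelayListenUri(ns, path);
const server = https.createRelayedServer(
    {
        server: uri,
        token: () => https.createRelayToken(uri, keyrule, key)
    },
    (req, res) => {
        res.setHeader('Content-Type', 'text/plain');
        res.end('Hello from behind the firewall!');
    }
);

server.listen(() => console.log(`Listening at ${uri}`));
```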

So that's effectively the successor of the WCF support for HTTP that we used to have. We just released that, and you're going to hear more about the Relay in the near future for those integration scenarios. So, in summary: autonomous services, still true. Functions are a way to implement those services in independently deployable ways. Platform-as-a-service gives you various ways of hosting these things. Messaging and Eventing, as the communication backplanes, tie it all together. And that's why we're building all these things, and why we have so many carrier pigeons. I hope you found that useful, and Dan is going to talk to you about the details after the break.
