Microsoft clients for Azure Event Hubs have always had two levels of abstraction. There is the low-level client, which includes event sender and receiver classes that allow maximum control by the application, but also force the application to understand the configuration of the Event Hub and maintain an event receiver connected to each partition. Built on top of that low-level client is a higher-level library, Event Processor Host, which hides most of those details on the receiving side. Event Processor Host automatically distributes ownership of Event Hub partitions across multiple host instances and delivers events to a processing method provided by the application.
Service Fabric is another Microsoft offering: a generalized framework for dividing an application into shards and distributing those shards across multiple compute nodes. Many customers use Service Fabric for their applications, and some of those applications need to receive events from an Event Hub. It is possible to use Event Processor Host within a Service Fabric application, but the combination is inelegant and redundant: there are two separate layers attempting to distribute load across nodes, and neither one is aware of the other. It also introduces a dependency on Azure Storage, which Event Processor Host instances use to coordinate partition ownership, along with the associated costs.
Service Fabric Processor is a new library for consuming events from an Event Hub that is directly integrated with Service Fabric: it uses Service Fabric's facilities for managing partitions, for reliable storage, and for more sophisticated load balancing. At the same time, it provides a simple programming interface that will be familiar to anyone who has worked with Event Processor Host. The only specific requirement Service Fabric Processor imposes is that the Service Fabric application in which it runs must have the same number of partitions as the Event Hub from which it consumes. This allows a simple one-to-one mapping of Event Hub partitions to application partitions and lets Service Fabric distribute the load most effectively.
Service Fabric Processor is currently in preview and available on NuGet as the "Microsoft.Azure.EventHubs.ServiceFabricProcessor" package. The source code is on GitHub in our .NET Event Hubs client repository, and a sample application is also available on GitHub.
From the developer’s point of view, there are two major pieces to creating an application using Service Fabric Processor. The first piece is creating a class that implements the IEventProcessor interface. IEventProcessor specifies methods that are called when processing is starting up for a partition (OpenAsync), when processing is shutting down (CloseAsync), for handling notifications when an error has occurred (ProcessErrorAsync), and for processing events as they come in (ProcessEventsAsync). The last one is where the application’s business logic goes and is the key part of most applications.
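As an illustration, here is a minimal sketch of such a processor class. The method signatures, the PartitionContext members, and the checkpoint call are assumptions based on the preview package (which exposes IEventProcessor as an abstract base class whose methods are overridden) and should be verified against the programming guide; the class name and logging are placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs;
using Microsoft.Azure.EventHubs.ServiceFabricProcessor;

// Minimal processor: logs partition lifecycle and handles each batch of events.
class SampleEventProcessor : IEventProcessor
{
    public override Task OpenAsync(CancellationToken cancellationToken, PartitionContext context)
    {
        // Called once when processing starts for this partition.
        Console.WriteLine($"Partition {context.PartitionId} opened");
        return Task.CompletedTask;
    }

    public override Task CloseAsync(PartitionContext context, CloseReason reason)
    {
        // Called when processing for this partition is shutting down.
        Console.WriteLine($"Partition {context.PartitionId} closing: {reason}");
        return Task.CompletedTask;
    }

    public override Task ProcessErrorAsync(PartitionContext context, Exception error)
    {
        // Notification that an error occurred while receiving or processing.
        Console.WriteLine($"Error on partition {context.PartitionId}: {error.Message}");
        return Task.CompletedTask;
    }

    public override async Task ProcessEventsAsync(CancellationToken cancellationToken,
        PartitionContext context, IEnumerable<EventData> events)
    {
        // The application's business logic goes here.
        foreach (EventData e in events)
        {
            // ... application-specific handling of each event ...
        }

        // Persist progress so a restarted replica resumes from this point
        // (assumed checkpointing call; stored via Service Fabric reliable state).
        await context.CheckpointAsync();
    }
}
```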
The second piece is integrating with Service Fabric by adding code to the application's RunAsync method, which Service Fabric calls to run the application's functionality. The basic steps, sketched in code after the list below, are:
- Create an instance of EventProcessorOptions and set any options desired.
- Create an instance of the IEventProcessor implementation. This is the instance that will be used to process events for this partition.
- Create an instance of ServiceFabricProcessor, passing the options and processor objects to the constructor.
- Call RunAsync on the ServiceFabricProcessor instance, which starts the processing of events.
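The following sketch shows those steps wired into a stateful service's RunAsync. The ServiceFabricProcessor constructor arguments shown (service name, partition id, state manager, service partition, processor, connection string, consumer group, options) are assumptions to confirm against the programming guide, and the connection string is a placeholder.

```csharp
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Azure.EventHubs.ServiceFabricProcessor;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class SampleService : StatefulService
{
    public SampleService(StatefulServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // 1. Create an EventProcessorOptions instance and set any desired options.
        EventProcessorOptions options = new EventProcessorOptions();

        // 2. Create the IEventProcessor implementation for this partition.
        IEventProcessor processor = new SampleEventProcessor();

        // 3. Construct ServiceFabricProcessor, passing the options and processor.
        //    (Constructor parameters assumed from the preview's programming guide.)
        ServiceFabricProcessor sfp = new ServiceFabricProcessor(
            this.Context.ServiceName,             // Uri of this Service Fabric service
            this.Context.PartitionId,             // Guid of this service partition
            this.StateManager,                    // reliable state used for checkpoints
            this.Partition,                       // IStatefulServicePartition
            processor,
            "<event hub connection string>",      // placeholder
            "$Default",                           // consumer group
            options);

        // 4. Start processing; runs until the cancellation token fires.
        await sfp.RunAsync(cancellationToken);
    }
}
```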
Next steps
For more details, follow our programming guide, which is available on GitHub. Did you enjoy this blog post? Don't forget to leave your thoughts and feedback in the comment section below. You can also learn more about Event Hubs by visiting our product page.