
Write to Eventhub

SparkEventhubDestination

Bases: DestinationInterface

This Spark destination class is used to write batch or streaming data to Eventhubs. Eventhub configurations need to be specified as options in a dictionary. Additional optional configurations can be found in the connector documentation at https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#event-hubs-configuration. If using startingPosition or endingPosition, make sure to check out the Event Position section for more details and examples.

Parameters:

    data (DataFrame): Dataframe to be written to Eventhub. Required.
    options (dict): A dictionary of Eventhub configurations (see Attributes below). All configuration options for Eventhubs can be found at https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#event-hubs-configuration. Required.

Attributes:

    checkpointLocation (str): Path to checkpoint files. (Streaming)
    eventhubs.connectionString (str): Eventhubs connection string, required to connect to the Eventhubs service. (Streaming and Batch)
    eventhubs.consumerGroup (str): A consumer group is a view of an entire eventhub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. (Streaming and Batch)
    eventhubs.startingPosition (JSON str): The starting position for your Structured Streaming job. If a specific EventPosition is not set for a partition using startingPositions, then the EventPosition set in startingPosition is used. If nothing is set in either option, consumption begins from the end of the partition. (Streaming and Batch)
    eventhubs.endingPosition (JSON str): The ending position of a batch query. This works the same as startingPosition. (Batch)
    maxEventsPerTrigger (long): Rate limit on the maximum number of events processed per trigger interval. The specified total number of events will be proportionally split across partitions of different volume. (Streaming)
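
Since startingPosition and endingPosition are JSON strings, the sketch below shows one way to build one in Python. This is a minimal example assuming the EventPosition JSON format described in the Azure Event Hubs Spark connector documentation linked above; it is not taken from this class's source.

import json

# A sketch of an EventPosition JSON string for eventhubs.startingPosition,
# assuming the connector's documented EventPosition format.
starting_position = json.dumps({
    "offset": "-1",        # "-1" = start of the partition
    "seqNo": -1,           # ignored when offset is set
    "enqueuedTime": None,  # ignored when offset is set
    "isInclusive": True
})

options = {"eventhubs.startingPosition": starting_position}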

Source code in src/sdk/python/rtdip_sdk/pipelines/destinations/spark/eventhub.py
class SparkEventhubDestination(DestinationInterface):
    '''
    This Spark destination class is used to write batch or streaming data to Eventhubs. Eventhub configurations need to be specified as options in a dictionary.
    Additionally, there are more optional configurations which can be found [here.](https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#event-hubs-configuration){ target="_blank" }
    If using startingPosition or endingPosition make sure to check out the **Event Position** section for more details and examples.

    Args:
        data (DataFrame): Dataframe to be written to Eventhub
        options (dict): A dictionary of Eventhub configurations (See Attributes table below). All Configuration options for Eventhubs can be found [here.](https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/PySpark/structured-streaming-pyspark.md#event-hubs-configuration){ target="_blank" }

    Attributes:
        checkpointLocation (str): Path to checkpoint files. (Streaming)
        eventhubs.connectionString (str): Eventhubs connection string is required to connect to the Eventhubs service. (Streaming and Batch)
        eventhubs.consumerGroup (str): A consumer group is a view of an entire eventhub. Consumer groups enable multiple consuming applications to each have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. (Streaming and Batch)
        eventhubs.startingPosition (JSON str): The starting position for your Structured Streaming job. If a specific EventPosition is not set for a partition using startingPositions, then we use the EventPosition set in startingPosition. If nothing is set in either option, we will begin consuming from the end of the partition. (Streaming and Batch)
        eventhubs.endingPosition (JSON str): The ending position of a batch query. This works the same as startingPosition. (Batch)
        maxEventsPerTrigger (long): Rate limit on maximum number of events processed per trigger interval. The specified total number of events will be proportionally split across partitions of different volume. (Streaming)
    '''
    data: DataFrame
    options: dict

    def __init__(self, data: DataFrame, options: dict) -> None:
        self.data = data
        self.options = options

    @staticmethod
    def system_type():
        '''
        Attributes:
            SystemType (Environment): Requires PYSPARK
        '''             
        return SystemType.PYSPARK

    @staticmethod
    def libraries():
        spark_libraries = Libraries()
        spark_libraries.add_maven_library(DEFAULT_PACKAGES["spark_azure_eventhub"])
        return spark_libraries

    @staticmethod
    def settings() -> dict:
        return {}

    def pre_write_validation(self):
        return True

    def post_write_validation(self):
        return True

    def write_batch(self):
        '''
        Writes batch data to Eventhubs.
        '''
        try:
            return (
                self.data
                .write
                .format("eventhubs")
                .options(**self.options)
                .save()
            )

        except Py4JJavaError as e:
            logging.exception(e.errmsg)
            raise e
        except Exception as e:
            logging.exception(str(e))
            raise e

    def write_stream(self):
        '''
        Writes streaming data to Eventhubs.
        '''
        try:
            query = (
                self.data
                .writeStream
                .format("eventhubs")
                .options(**self.options)
                .start()
            )
            while query.isActive:
                if query.lastProgress:
                    logging.info(query.lastProgress)
                time.sleep(10)

        except Py4JJavaError as e:
            logging.exception(e.errmsg)
            raise e
        except Exception as e:
            logging.exception(str(e))
            raise e

system_type() staticmethod

Attributes:

    SystemType (Environment): Requires PYSPARK

Source code in src/sdk/python/rtdip_sdk/pipelines/destinations/spark/eventhub.py
@staticmethod
def system_type():
    '''
    Attributes:
        SystemType (Environment): Requires PYSPARK
    '''             
    return SystemType.PYSPARK

write_batch()

Writes batch data to Eventhubs.

Source code in src/sdk/python/rtdip_sdk/pipelines/destinations/spark/eventhub.py
def write_batch(self):
    '''
    Writes batch data to Eventhubs.
    '''
    try:
        return (
            self.data
            .write
            .format("eventhubs")
            .options(**self.options)
            .save()
        )

    except Py4JJavaError as e:
        logging.exception(e.errmsg)
        raise e
    except Exception as e:
        logging.exception(str(e))
        raise e
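
A rough usage sketch for a batch write follows. The DataFrame df, the Spark session spark, and the connection string are placeholders, and the call to EventHubsUtils.encrypt assumes the connector's documented requirement that the connection string be encrypted before use.

# Hypothetical batch write; df is an existing DataFrame with a "body" column
# and the connection string below is a placeholder.
connection_string = "Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<name>;SharedAccessKey=<key>;EntityPath=<eventhub>"

# The connector expects an encrypted connection string (see its documentation).
eh_conf = {
    "eventhubs.connectionString": spark.sparkContext._jvm.org.apache.spark.eventhubs.EventHubsUtils.encrypt(
        connection_string
    )
}

SparkEventhubDestination(data=df, options=eh_conf).write_batch()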

write_stream()

Writes streaming data to Eventhubs.

Source code in src/sdk/python/rtdip_sdk/pipelines/destinations/spark/eventhub.py
def write_stream(self):
    '''
    Writes streaming data to Eventhubs.
    '''
    try:
        query = (
            self.data
            .writeStream
            .format("eventhubs")
            .options(**self.options)
            .start()
        )
        while query.isActive:
            if query.lastProgress:
                logging.info(query.lastProgress)
            time.sleep(10)

    except Py4JJavaError as e:
        logging.exception(e.errmsg)
        raise e
    except Exception as e:
        logging.exception(str(e))
        raise e
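
A similar sketch for a streaming write; streaming_df and encrypted_connection_string are placeholders (the latter produced as in the batch example above), and checkpointLocation points at a hypothetical path. Note that write_stream blocks while the query is active, logging query progress roughly every 10 seconds.

# Hypothetical streaming write; streaming_df is an existing streaming DataFrame.
eh_conf = {
    "eventhubs.connectionString": encrypted_connection_string,
    "checkpointLocation": "/tmp/checkpoints/eventhub-write",  # placeholder path
}

SparkEventhubDestination(data=streaming_df, options=eh_conf).write_stream()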