Write to Delta using Merge

SparkDeltaMergeDestination

Bases: DestinationInterface

The Spark Delta Merge Destination is used to merge data into a Delta table. Refer to the [Delta Merge documentation](https://docs.delta.io/latest/delta-update.html#upsert-into-a-table-using-merge&language-python) for more information about Delta Merge.

Parameters:

| Name | Type | Description | Default |
| ---- | ---- | ----------- | ------- |
| data | DataFrame | Dataframe to be merged into a Delta Table | required |
| destination | str | Either the name of the Hive Metastore or Unity Catalog Delta Table or the path to the Delta table | required |
| options | dict | Options that can be specified for a Delta Table write operation (see Attributes table below). Further information on the options is available for [batch](https://docs.delta.io/latest/delta-batch.html#write-to-a-table) and [streaming](https://docs.delta.io/latest/delta-streaming.html#delta-table-as-a-sink). | required |
| merge_condition | str | Condition for matching records between the Dataframe and the Delta table. Reference Dataframe columns as `source` and Delta Table columns as `target`, for example `source.id = target.id`. | required |
| when_matched_update_list | list[DeltaMergeConditionValues] | Conditions (optional) and values to be used when updating rows that match the `merge_condition`. Specify `*` for values if all columns from the Dataframe should be updated. | None |
| when_matched_delete_list | list[DeltaMergeCondition] | Conditions (optional) to be used when deleting rows that match the `merge_condition`. | None |
| when_not_matched_insert_list | list[DeltaMergeConditionValues] | Conditions (optional) and values to be used when inserting rows that do not match the `merge_condition`. Specify `*` for values if all columns from the Dataframe should be inserted. | None |
| when_not_matched_by_source_update_list | list[DeltaMergeConditionValues] | Conditions (optional) and values to be used when updating rows in the Delta table that do not match the `merge_condition`. | None |
| when_not_matched_by_source_delete_list | list[DeltaMergeCondition] | Conditions (optional) to be used when deleting rows in the Delta table that do not match the `merge_condition`. | None |
| try_broadcast_join | bool | Attempts to perform a broadcast join in the merge, which can automatically leverage data skipping through partition and file pruning. The broadcast can fail if the Dataframe being merged is large, so this option is more suitable for streaming merges than batch merges. | False |
| trigger | str | Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds" or "5 minutes". | '10 seconds' |
| query_name | str | Unique name for the query in the associated SparkSession. | 'DeltaMergeDestination' |

Attributes:

| Name | Type | Description |
| ---- | ---- | ----------- |
| checkpointLocation | str | Path to checkpoint files. (Streaming) |
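
A minimal batch usage sketch is shown below. It assumes an existing SparkSession configured with this component's `settings()`, a DataFrame `df` to merge, and that `DeltaMergeConditionValues` can be constructed with `condition` and `values` keyword arguments as the source further down uses them; the table and column names are hypothetical.

```python
# Hedged sketch: upsert a DataFrame into a Delta table with write_batch().
# Import paths follow the source file location; the table name, column names
# and DeltaMergeConditionValues arguments are illustrative assumptions.
from rtdip_sdk.pipelines.destinations.spark.delta_merge import (
    SparkDeltaMergeDestination,
    DeltaMergeConditionValues,
)

merge_destination = SparkDeltaMergeDestination(
    spark=spark,                               # existing SparkSession with Delta enabled
    data=df,                                   # DataFrame containing the new records
    destination="catalog.schema.example_table",
    options={},
    merge_condition="source.id = target.id",   # source = DataFrame, target = Delta table
    when_matched_update_list=[
        DeltaMergeConditionValues(condition=None, values="*")   # update all columns on match
    ],
    when_not_matched_insert_list=[
        DeltaMergeConditionValues(condition=None, values="*")   # insert all columns otherwise
    ],
)

merge_destination.write_batch()
```

With `values="*"` the merge behaves like a classic upsert: matched rows are fully updated and unmatched rows are inserted.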

Source code in src/sdk/python/rtdip_sdk/pipelines/destinations/spark/delta_merge.py
class SparkDeltaMergeDestination(DestinationInterface):
    """
    The Spark Delta Merge Destination is used to merge data into a Delta table. Refer to this [documentation](https://docs.delta.io/latest/delta-update.html#upsert-into-a-table-using-merge&language-python) for more information about Delta Merge.

    Args:
        data (DataFrame): Dataframe to be merged into a Delta Table
        destination (str): Either the name of the Hive Metastore or Unity Catalog Delta Table **or** the path to the Delta table
        options (dict): Options that can be specified for a Delta Table write operation (See Attributes table below). Further information on the options is available for [batch](https://docs.delta.io/latest/delta-batch.html#write-to-a-table){ target="_blank" } and [streaming](https://docs.delta.io/latest/delta-streaming.html#delta-table-as-a-sink){ target="_blank" }.
        merge_condition (str): Condition for matching records between dataframe and delta table. Reference Dataframe columns as `source` and Delta Table columns as `target`. For example `source.id = target.id`.
        when_matched_update_list (list[DeltaMergeConditionValues]): Conditions(optional) and values to be used when updating rows that match the `merge_condition`. Specify `*` for Values if all columns from Dataframe should be updated.
        when_matched_delete_list (list[DeltaMergeCondition]): Conditions(optional) to be used when deleting rows that match the `merge_condition`.
        when_not_matched_insert_list (list[DeltaMergeConditionValues]): Conditions(optional) and values to be used when inserting rows that do not match the `merge_condition`. Specify `*` for Values if all columns from Dataframe should be inserted.
        when_not_matched_by_source_update_list (list[DeltaMergeConditionValues]): Conditions(optional) and values to be used when updating rows that do not match the `merge_condition`.
        when_not_matched_by_source_delete_list (list[DeltaMergeCondition]): Conditions(optional) to be used when deleting rows that do not match the `merge_condition`.
        try_broadcast_join (bool): Attempts to perform a broadcast join in the merge which can leverage data skipping using partition pruning and file pruning automatically. Can fail if dataframe being merged is large and therefore more suitable for streaming merges than batch merges
        trigger (str): Frequency of the write operation. Specify "availableNow" to execute a trigger once, otherwise specify a time period such as "30 seconds", "5 minutes"
        query_name (str): Unique name for the query in associated SparkSession

    Attributes:
        checkpointLocation (str): Path to checkpoint files. (Streaming)
    """

    spark: SparkSession
    data: DataFrame
    destination: str
    options: dict
    merge_condition: str
    when_matched_update_list: List[DeltaMergeConditionValues]
    when_matched_delete_list: List[DeltaMergeCondition]
    when_not_matched_insert_list: List[DeltaMergeConditionValues]
    when_not_matched_by_source_update_list: List[DeltaMergeConditionValues]
    when_not_matched_by_source_delete_list: List[DeltaMergeCondition]
    try_broadcast_join: bool
    trigger: str
    query_name: str

    def __init__(
        self,
        spark: SparkSession,
        data: DataFrame,
        destination: str,
        options: dict,
        merge_condition: str,
        when_matched_update_list: List[DeltaMergeConditionValues] = None,
        when_matched_delete_list: List[DeltaMergeCondition] = None,
        when_not_matched_insert_list: List[DeltaMergeConditionValues] = None,
        when_not_matched_by_source_update_list: List[DeltaMergeConditionValues] = None,
        when_not_matched_by_source_delete_list: List[DeltaMergeCondition] = None,
        try_broadcast_join: bool = False,
        trigger: str = "10 seconds",
        query_name: str = "DeltaMergeDestination",
    ) -> None:
        self.spark = spark
        self.data = data
        self.destination = destination
        self.options = options
        self.merge_condition = merge_condition
        self.when_matched_update_list = (
            [] if when_matched_update_list is None else when_matched_update_list
        )
        self.when_matched_delete_list = (
            [] if when_matched_delete_list is None else when_matched_delete_list
        )
        self.when_not_matched_insert_list = (
            [] if when_not_matched_insert_list is None else when_not_matched_insert_list
        )
        if (
            isinstance(when_not_matched_by_source_update_list, list)
            and len(when_not_matched_by_source_update_list) > 0
        ):
            _package_version_meets_minimum("delta-spark", "2.3.0")
        self.when_not_matched_by_source_update_list = (
            []
            if when_not_matched_by_source_update_list is None
            else when_not_matched_by_source_update_list
        )
        if (
            isinstance(when_not_matched_by_source_delete_list, list)
            and len(when_not_matched_by_source_delete_list) > 0
        ):
            _package_version_meets_minimum("delta-spark", "2.3.0")
        self.when_not_matched_by_source_delete_list = (
            []
            if when_not_matched_by_source_delete_list is None
            else when_not_matched_by_source_delete_list
        )
        self.try_broadcast_join = try_broadcast_join
        self.trigger = trigger
        self.query_name = query_name

    @staticmethod
    def system_type():
        """
        Attributes:
            SystemType (Environment): Requires PYSPARK
        """
        return SystemType.PYSPARK

    @staticmethod
    def libraries():
        libraries = Libraries()
        libraries.add_maven_library(get_default_package("spark_delta_core"))
        return libraries

    @staticmethod
    def settings() -> dict:
        return {
            "spark.sql.extensions": "io.delta.sql.DeltaSparkSessionExtension",
            "spark.sql.catalog.spark_catalog": "org.apache.spark.sql.delta.catalog.DeltaCatalog",
            "spark.databricks.delta.schema.autoMerge.enabled": "true",
        }

    def pre_write_validation(self):
        return True

    def post_write_validation(self):
        return True

    def _delta_merge_builder(
        self, df: DataFrame, try_broadcast_join: bool
    ) -> DeltaMergeBuilder:
        if "/" in self.destination:
            delta_table = DeltaTable.forPath(self.spark, self.destination)
        else:
            delta_table = DeltaTable.forName(self.spark, self.destination)

        if try_broadcast_join:
            delta_merge_builder = delta_table.alias("target").merge(
                source=broadcast(df).alias("source"), condition=self.merge_condition
            )
        else:
            delta_merge_builder = delta_table.alias("target").merge(
                source=df.alias("source"), condition=self.merge_condition
            )

        for when_matched_update in self.when_matched_update_list:
            if when_matched_update.values == "*":
                delta_merge_builder = delta_merge_builder.whenMatchedUpdateAll(
                    condition=when_matched_update.condition,
                )
            else:
                delta_merge_builder = delta_merge_builder.whenMatchedUpdate(
                    condition=when_matched_update.condition,
                    set=when_matched_update.values,
                )

        for when_matched_delete in self.when_matched_delete_list:
            delta_merge_builder = delta_merge_builder.whenMatchedDelete(
                condition=when_matched_delete.condition,
            )

        for when_not_matched_insert in self.when_not_matched_insert_list:
            if when_not_matched_insert.values == "*":
                delta_merge_builder = delta_merge_builder.whenNotMatchedInsertAll(
                    condition=when_not_matched_insert.condition,
                )
            else:
                delta_merge_builder = delta_merge_builder.whenNotMatchedInsert(
                    condition=when_not_matched_insert.condition,
                    values=when_not_matched_insert.values,
                )

        for (
            when_not_matched_by_source_update
        ) in self.when_not_matched_by_source_update_list:
            delta_merge_builder = delta_merge_builder.whenNotMatchedBySourceUpdate(
                condition=when_not_matched_by_source_update.condition,
                set=when_not_matched_by_source_update.values,
            )

        for (
            when_not_matched_by_source_delete
        ) in self.when_not_matched_by_source_delete_list:
            delta_merge_builder = delta_merge_builder.whenNotMatchedBySourceDelete(
                condition=when_not_matched_by_source_delete.condition,
            )

        return delta_merge_builder

    def _stream_merge_micro_batch(
        self, micro_batch_df: DataFrame, epoch_id=None
    ):  # NOSONAR
        micro_batch_df.persist()

        retry_delta_merge = False

        if self.try_broadcast_join:
            try:
                delta_merge = self._delta_merge_builder(
                    micro_batch_df, self.try_broadcast_join
                )
                delta_merge.execute()
            except Exception as e:
                if "SparkOutOfMemoryError" in str(e):
                    retry_delta_merge = True
                else:
                    raise e

        if not self.try_broadcast_join or retry_delta_merge:
            delta_merge = self._delta_merge_builder(micro_batch_df, False)
            delta_merge.execute()

        micro_batch_df.unpersist()

    def write_batch(self):
        """
        Merges batch data into a Delta Table.
        """
        try:
            delta_merge = self._delta_merge_builder(self.data, self.try_broadcast_join)
            return delta_merge.execute()

        except Py4JJavaError as e:
            logging.exception(e.errmsg)
            raise e
        except Exception as e:
            logging.exception(str(e))
            raise e

    def write_stream(self):
        """
        Merges streaming data to Delta using foreachBatch
        """
        TRIGGER_OPTION = (
            {"availableNow": True}
            if self.trigger == "availableNow"
            else {"processingTime": self.trigger}
        )
        try:
            query = (
                self.data.writeStream.trigger(**TRIGGER_OPTION)
                .format("delta")
                .foreachBatch(self._stream_merge_micro_batch)
                .queryName(self.query_name)
                .outputMode("update")
                .options(**self.options)
                .start()
            )

            while query.isActive:
                if query.lastProgress:
                    logging.info(query.lastProgress)
                time.sleep(10)

        except Py4JJavaError as e:
            logging.exception(e.errmsg)
            raise e
        except Exception as e:
            logging.exception(str(e))
            raise e
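
For orientation, the merge that `_delta_merge_builder` assembles for the upsert case above corresponds roughly to the following plain delta-spark chain. This is a sketch rather than the component's exact code path: it assumes a table-name destination and `*` values for both the matched and not-matched clauses.

```python
from delta.tables import DeltaTable

# Roughly what the destination executes for a simple upsert
delta_table = DeltaTable.forName(spark, "catalog.schema.example_table")

(
    delta_table.alias("target")
    .merge(source=df.alias("source"), condition="source.id = target.id")
    .whenMatchedUpdateAll()      # when_matched_update_list with values="*"
    .whenNotMatchedInsertAll()   # when_not_matched_insert_list with values="*"
    .execute()
)
```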

system_type() staticmethod

Attributes:

| Name | Type | Description |
| ---- | ---- | ----------- |
| SystemType | Environment | Requires PYSPARK |

Source code in src/sdk/python/rtdip_sdk/pipelines/destinations/spark/delta_merge.py
@staticmethod
def system_type():
    """
    Attributes:
        SystemType (Environment): Requires PYSPARK
    """
    return SystemType.PYSPARK

write_batch()

Merges batch data into a Delta Table.

Source code in src/sdk/python/rtdip_sdk/pipelines/destinations/spark/delta_merge.py
def write_batch(self):
    """
    Merges batch data into a Delta Table.
    """
    try:
        delta_merge = self._delta_merge_builder(self.data, self.try_broadcast_join)
        return delta_merge.execute()

    except Py4JJavaError as e:
        logging.exception(e.errmsg)
        raise e
    except Exception as e:
        logging.exception(str(e))
        raise e
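
When only selected columns should change, the condition and value objects can carry an explicit mapping instead of `*`. The sketch below reuses the imports and the `spark`/`df` names from the earlier batch example; the column names and the keyword arguments of `DeltaMergeConditionValues` are assumptions, and the value mapping mirrors delta-spark's `whenMatchedUpdate(set=...)` form of target column to SQL expression.

```python
# Hedged sketch: update only two columns for matched rows that are newer,
# insert everything for unmatched rows.
merge_destination = SparkDeltaMergeDestination(
    spark=spark,
    data=df,
    destination="catalog.schema.example_table",
    options={},
    merge_condition="source.id = target.id",
    when_matched_update_list=[
        DeltaMergeConditionValues(
            condition="source.updated_at > target.updated_at",  # only overwrite with newer data
            values={
                "value": "source.value",
                "updated_at": "source.updated_at",
            },
        )
    ],
    when_not_matched_insert_list=[
        DeltaMergeConditionValues(condition=None, values="*")
    ],
)

merge_destination.write_batch()
```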

write_stream()

Merges streaming data to Delta using foreachBatch

Source code in src/sdk/python/rtdip_sdk/pipelines/destinations/spark/delta_merge.py
def write_stream(self):
    """
    Merges streaming data to Delta using foreachBatch
    """
    TRIGGER_OPTION = (
        {"availableNow": True}
        if self.trigger == "availableNow"
        else {"processingTime": self.trigger}
    )
    try:
        query = (
            self.data.writeStream.trigger(**TRIGGER_OPTION)
            .format("delta")
            .foreachBatch(self._stream_merge_micro_batch)
            .queryName(self.query_name)
            .outputMode("update")
            .options(**self.options)
            .start()
        )

        while query.isActive:
            if query.lastProgress:
                logging.info(query.lastProgress)
            time.sleep(10)

    except Py4JJavaError as e:
        logging.exception(e.errmsg)
        raise e
    except Exception as e:
        logging.exception(str(e))
        raise e
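
A streaming merge follows the same pattern; `write_stream()` wraps the merge in `foreachBatch` and blocks while the query is active. The sketch below assumes `streaming_df` is a streaming DataFrame and reuses the hypothetical table name and condition objects from the batch examples; the checkpoint path is illustrative.

```python
# Hedged sketch: merge each micro-batch of a streaming DataFrame into the Delta table.
streaming_destination = SparkDeltaMergeDestination(
    spark=spark,
    data=streaming_df,                          # streaming DataFrame, e.g. from a Delta or Kafka source
    destination="catalog.schema.example_table",
    options={"checkpointLocation": "/tmp/checkpoints/example_table_merge"},
    merge_condition="source.id = target.id",
    when_matched_update_list=[DeltaMergeConditionValues(condition=None, values="*")],
    when_not_matched_insert_list=[DeltaMergeConditionValues(condition=None, values="*")],
    trigger="30 seconds",                       # or "availableNow" for a one-off trigger
    query_name="ExampleDeltaMerge",
)

streaming_destination.write_stream()            # runs until the streaming query stops
```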