Migration Guide: Structured Streaming
- Upgrading from Structured Streaming 3.2.1 to 3.2.2
- Upgrading from Structured Streaming 3.0 to 3.1
- Upgrading from Structured Streaming 2.4 to 3.0
Note that this migration guide describes the items specific to Structured Streaming. Many items of the SQL migration guide also apply when migrating Structured Streaming to higher versions. Please refer to Migration Guide: SQL, Datasets and DataFrame.
Upgrading from Structured Streaming 3.2.1 to 3.2.2
- Since Spark 3.2.2 (and 3.3), all stateful operators require hash partitioning with exact grouping keys. In previous versions, all stateful operators except stream-stream join required looser partitioning criteria, which opened the possibility of correctness issues. (See SPARK-38204 for more details.) To ensure backward compatibility, we retain the old behavior for checkpoints built by older versions.
Upgrading from Structured Streaming 3.0 to 3.1
- In Spark 3.0 and before, for queries that have a stateful operation which can emit rows older than the current watermark plus the allowed late record delay ("late rows" in downstream stateful operations, which may be discarded), Spark only prints a warning message. Since Spark 3.1, Spark checks such queries for possible correctness issues and throws AnalysisException for them by default. Users who understand the possible risk of correctness issues and still decide to run the query can disable this check by setting the config `spark.sql.streaming.statefulOperator.checkCorrectness.enabled` to `false` (see the first sketch after this list).
- In Spark 3.0 and before, Spark uses `KafkaConsumer` for offset fetching, which could cause an infinite wait in the driver. In Spark 3.1 a new configuration option, `spark.sql.streaming.kafka.useDeprecatedOffsetFetching` (default: `true`), was added; setting it to `false` allows Spark to use a new offset fetching mechanism based on `AdminClient` (see the second sketch after this list). For further details please see Structured Streaming Kafka Integration.
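Below is a minimal sketch of disabling the correctness check for a session; the application name and master are placeholders, and the check should only be disabled after understanding the risk described above.

```scala
import org.apache.spark.sql.SparkSession

// Placeholder session setup; app name and master are illustrative only.
val spark = SparkSession.builder()
  .appName("stateful-query-example")
  .master("local[*]")
  .getOrCreate()

// With the check disabled, queries flagged as possibly incorrect run with a
// warning (the pre-3.1 behavior) instead of failing with AnalysisException.
spark.conf.set("spark.sql.streaming.statefulOperator.checkCorrectness.enabled", "false")
```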
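And a sketch of opting in to the `AdminClient`-based offset fetching, reusing the `spark` session from the previous sketch; the broker address and topic name are hypothetical, and the `spark-sql-kafka-0-10` artifact is assumed to be on the classpath.

```scala
// Turn off the deprecated KafkaConsumer-based offset fetching so Spark 3.1+
// uses the new AdminClient-based mechanism instead.
spark.conf.set("spark.sql.streaming.kafka.useDeprecatedOffsetFetching", "false")

// Broker address and topic are placeholders.
val kafkaStream = spark.readStream
  .format("kafka")
  .option("kafka.bootstrap.servers", "broker1:9092")
  .option("subscribe", "my-topic")
  .load()
```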
Upgrading from Structured Streaming 2.4 to 3.0
- In Spark 3.0, Structured Streaming forces the source schema into nullable when file-based datasources such as text, json, csv, parquet and orc are used via `spark.readStream(...)`. Previously, it respected the nullability in the source schema; however, this caused issues that were tricky to debug, such as NPEs. To restore the previous behavior, set `spark.sql.streaming.fileSource.schema.forceNullable` to `false` (see the first sketch after this list).
- Spark 3.0 fixes the correctness issue on stream-stream outer join, which changes the schema of state. (See SPARK-26154 for more details.) If you start your query from a checkpoint constructed with Spark 2.x which uses stream-stream outer join, Spark 3.0 fails the query. To recalculate outputs, discard the checkpoint and replay previous inputs.
- In Spark 3.0, the deprecated class `org.apache.spark.sql.streaming.ProcessingTime` has been removed. Use `org.apache.spark.sql.streaming.Trigger.ProcessingTime` instead. Likewise, `org.apache.spark.sql.execution.streaming.continuous.ContinuousTrigger` has been removed in favor of `Trigger.Continuous`, and `org.apache.spark.sql.execution.streaming.OneTimeTrigger` has been hidden in favor of `Trigger.Once` (see the trigger sketch after this list).
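As a sketch of restoring the pre-3.0 nullability behavior from the first item above, reusing the `spark` session from the earlier sketches; the schema and input directory are placeholders.

```scala
import org.apache.spark.sql.types.{LongType, StringType, StructType}

// Respect the source schema's nullability instead of forcing it to nullable.
spark.conf.set("spark.sql.streaming.fileSource.schema.forceNullable", "false")

// Placeholder schema and directory; with the flag off, the non-nullable
// "id" field keeps nullable = false in the resulting streaming DataFrame.
val schema = new StructType()
  .add("id", LongType, nullable = false)
  .add("name", StringType, nullable = true)

val fileStream = spark.readStream
  .schema(schema)
  .json("/path/to/json/dir")
```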
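And a trigger sketch for the last item; the intervals and the console sink are arbitrary examples, and `fileStream` is the placeholder stream from the previous sketch.

```scala
import org.apache.spark.sql.streaming.Trigger

// Public factory methods that replace the removed/hidden classes:
val processingTime = Trigger.ProcessingTime("10 seconds") // replaces ProcessingTime
val continuous     = Trigger.Continuous("1 second")       // replaces ContinuousTrigger
val once           = Trigger.Once()                       // replaces OneTimeTrigger

// Example usage with an arbitrary sink and interval:
val query = fileStream.writeStream
  .format("console")
  .trigger(Trigger.ProcessingTime("10 seconds"))
  .start()
```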