In Spark 3.2, the PySpark methods from the sql, ml, and spark_on_pandas modules raise TypeError instead of ValueError when they are applied to a param of an inappropriate type.
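For example, a minimal sketch of what this can mean for calling code; DataFrame.sample with a string fraction is only an assumed illustration of an inappropriately typed param, and code that previously caught ValueError may need to catch TypeError as well:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
df = spark.range(10)

try:
    df.sample(fraction="0.5")  # fraction should be a float, not a string
except (TypeError, ValueError) as e:
    # Spark 3.2 and later raise TypeError for params of inappropriate type;
    # some of these methods raised ValueError in Spark 3.1 and earlier.
    print(f"Rejected argument: {e}")
```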
In Spark 3.2, the traceback from Python UDFs, pandas UDFs, and pandas function APIs is simplified by default and no longer includes the traceback from the internal Python workers. In Spark 3.1 or earlier, the traceback from the Python workers was printed out. To restore the behavior before Spark 3.2, you can set spark.sql.execution.pyspark.udf.simplifiedTraceback.enabled to false.
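For example, a minimal sketch of restoring the full traceback; the failing UDF is only there to produce an error to inspect:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import udf

# Setting the conf to false restores the pre-3.2 behavior and includes the
# traceback frames from the internal Python workers in UDF errors.
spark = (
    SparkSession.builder
    .config("spark.sql.execution.pyspark.udf.simplifiedTraceback.enabled", "false")
    .getOrCreate()
)

@udf("int")
def fail(x):
    raise RuntimeError("boom")  # raised inside the Python worker

try:
    spark.range(1).select(fail("id")).collect()
except Exception as e:
    print(e)  # how much of the worker traceback appears depends on the conf above
```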
In Spark 3.2, pinned thread mode is enabled by default, so each Python thread maps to its own JVM thread. Previously, one JVM thread could be reused for multiple Python threads, which caused one JVM thread local to be shared across multiple Python threads. Also, note that pyspark.InheritableThread or pyspark.inheritable_thread_target is now recommended so that a Python thread properly inherits inheritable attributes such as local properties from a JVM thread, and to avoid a potential resource leak issue. To restore the behavior before Spark 3.2, you can set the PYSPARK_PIN_THREAD environment variable to false.
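For example, a minimal sketch of running a job from a child thread with pyspark.InheritableThread, so that local properties set in the parent thread, such as a job group, are inherited; the group name and the job itself are arbitrary:

```python
from pyspark import InheritableThread
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
sc = spark.sparkContext

# A local property set in the parent thread, e.g. a job group used for cancellation.
sc.setJobGroup("example-group", "jobs launched from a child thread")

def run_job():
    # Under pinned thread mode, this thread gets its own JVM thread and
    # inherits the parent's local properties, including the job group above.
    spark.range(100).count()

t = InheritableThread(target=run_job)
t.start()
t.join()

# To fall back to the pre-3.2 behavior instead, export PYSPARK_PIN_THREAD=false
# in the environment before starting the Python process.
```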