I'm the author of Data Engineering Design Patterns (O'Reilly),
a Databricks MVP, and
a freelance data engineer specializing in Apache Spark and Databricks.
I help teams move from working pipelines to resilient architectures.
Usually you can define a schema for your input data with the .schema(...) function of Apache Spark SQL:

val ordersFromJson = testedSparkSession.read.schema(orderSchema)
  .json(ordersToAddFile)

Ho...
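The snippet above references an orderSchema value without showing how it is built. A common way to construct such a schema in Spark SQL is with StructType and StructField; the field names and types below are purely illustrative assumptions, not the schema from the original example:

```scala
import org.apache.spark.sql.types.{DoubleType, IntegerType, StringType, StructField, StructType}

// A hypothetical schema for the orders JSON file; adjust the fields
// to match your real input data.
val orderSchema = StructType(Seq(
  StructField("id", IntegerType, nullable = false),      // order identifier
  StructField("customer", StringType, nullable = true),  // customer name
  StructField("amount", DoubleType, nullable = true)     // order total
))
```

Passing this StructType to .schema(...) skips Spark's schema inference pass over the JSON file, which both speeds up the read and guarantees the column types you expect.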