Apache Spark on Kubernetes - init containers

Versions: Apache Spark 2.3.1

Initialization is the very first step of almost every application. Unsurprisingly, it's also the case in Kubernetes, which uses Init Containers to execute setup operations before starting a pod's regular containers.

In this post we'll focus on the Init Containers feature and its use in Apache Spark. The first section presents its main ideas. The second one explains how it's implemented in the Spark on Kubernetes project.

Init Containers defined

Init Containers are very similar to regular containers. They share the same properties (volumes, resources, secrets, images) and one or more of them can be defined in a pod declaration. They are subject to the pod's restartPolicy, except that Always is translated to OnFailure.

The main difference between Init and regular containers comes from the lifecycle. Init Containers are executed just before the regular ones in order to carry out initialization tasks such as: delaying the regular containers' execution (complementing the readiness probe), installing common dependencies (e.g. external plugins) or reading data that regular containers cannot access. Since a pod can contain one or more Init Containers, it executes them sequentially, only one at a given moment. Each new Init Container is executed only if the previous one succeeded. The execution happens after the pod's network and volumes are initialized.

Init Containers also use the resources attribute but their resolution rule is different. Kubernetes takes the highest values declared among all Init Container definitions as the effective requests and limits for the initialization phase.
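As a quick illustration of both rules, here is a minimal, hypothetical pod declaration (my-service, config-server and the images are invented for the example): Kubernetes starts fetch-config only after wait-for-service succeeds, and the CPU limit taken into account for the initialization phase is the highest declared one, i.e. 200m.

apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  initContainers:
  # executed first: blocks until the (hypothetical) my-service DNS entry resolves
  - name: wait-for-service
    image: busybox:1.28
    command: ['sh', '-c', 'until nslookup my-service; do sleep 2; done']
    resources:
      limits:
        cpu: 100m
  # executed only after wait-for-service succeeded
  - name: fetch-config
    image: busybox:1.28
    command: ['wget', '-O', '/work-dir/config.json', 'http://config-server/config.json']
    resources:
      limits:
        cpu: 200m
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  containers:
  # the regular container starts only once both Init Containers completed successfully
  - name: app
    image: busybox:1.28
    command: ['sh', '-c', 'cat /work-dir/config.json && sleep 3600']
    volumeMounts:
    - name: workdir
      mountPath: /work-dir
  volumes:
  - name: workdir
    emptyDir: {}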

In the YAML template, Init Containers belong to the spec.initContainers section. As already mentioned, they are defined just like regular ones, with the possibility to specify volumes, resource requests and limits, and security settings. The next section shows an example generated by Spark on Kubernetes.

Init Containers in Kubernetes scheduler

The role of Init Containers in the Apache Spark Kubernetes scheduler is one of those mentioned above. They're used to download remote dependencies, for instance from remote storage (NFS, S3 or GCS) or remote HTTP endpoints. These remote dependencies can be passed to an Apache Spark application through the spark.files and spark.jars configuration options, or by spark-submit's --files and --jars flags. The dependencies are downloaded from the declared location to the directories configured in the spark.kubernetes.mountDependencies.filesDownloadDir (for --files) and spark.kubernetes.mountDependencies.jarsDownloadDir (for --jars) entries. The image for the Init Container is specified in the spark.kubernetes.initContainer.image property. The dependencies must be downloaded within the timeout specified in the spark.kubernetes.mountDependencies.timeout configuration field.
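As a sketch of how these properties could be set explicitly, the fragment below could be added to a spark-submit command like the one shown further down. The directory and timeout values are the Spark 2.3 defaults, and when spark.kubernetes.initContainer.image is not set it falls back to spark.kubernetes.container.image:

# extra --conf entries that could be added to the spark-submit command shown below
# (the directory and timeout values are the Spark 2.3 defaults)
--conf spark.kubernetes.initContainer.image=spark:latest \
--conf spark.kubernetes.mountDependencies.filesDownloadDir=/var/spark-data/spark-files \
--conf spark.kubernetes.mountDependencies.jarsDownloadDir=/var/spark-data/spark-jars \
--conf spark.kubernetes.mountDependencies.timeout=300s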

Let's now see how Init Containers integrate with the Apache Spark driver and executors. It's quite easy to show since we can simply specify a file to download from some website. In my case I'll fetch the content of http://www.waitingforcode.com with the following command:

./bin/spark-submit \
  --master k8s://localhost:6445 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=spark:latest \
  --files http://www.waitingforcode.com \
  --conf spark.app.name=spark-pi \
  local:///opt/spark/jars/spark-examples_2.11-2.3.0.jar

The Init Container is defined in the generated pod's template as follows:

apiVersion: v1
kind: Pod
metadata:
  # ...
  labels:
    spark-app-selector: spark-50e3c4e646c74c688f601b3a9f65309b
    spark-role: driver
  name: spark-pi-697f62f0f4563142b43766a2192ceedb-driver
  # ....
spec:
  containers:
  # ...
  initContainers:
  - args:
    - init
    - /etc/spark-init/spark-init.properties
    image: spark:latest
    imagePullPolicy: IfNotPresent
    name: spark-init
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /etc/spark-init
      name: spark-init-properties
    - mountPath: /var/spark-data/spark-jars
      name: download-jars-volume
    - mountPath: /var/spark-data/spark-files
      name: download-files-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-jgd7n
      readOnly: true

As you can see, the spec.initContainers section has the same properties as the regular containers one, with volumes, a Docker image and even termination customization.

Init Containers are Kubernetes' way to ensure that regular containers have everything they need to run. One of their use cases is dependency download, and that's how the Spark on Kubernetes project uses them. As shown in the second section, the dependency files and jars are downloaded from a remote place and put inside the directories defined as values of the spark.kubernetes.mountDependencies.filesDownloadDir and spark.kubernetes.mountDependencies.jarsDownloadDir entries. Thanks to that, drivers and executors are sure to have all dependencies available locally before beginning the processing.