Partitioning is a common concept in distributed data processing. Spark is no exception, and it exposes several operations for working with partitions.
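To make this concrete, here is a minimal sketch of those operations, assuming a local SparkContext; the object and application names are illustrative only:

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical demo object; names are illustrative, not from the post.
object PartitioningDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("partitioning-demo").setMaster("local[2]"))

    // Create an RDD with an explicit number of partitions.
    val numbers = sc.parallelize(1 to 100, 4)
    println(numbers.getNumPartitions) // 4

    // repartition shuffles the data and can increase or decrease the count.
    println(numbers.repartition(8).getNumPartitions) // 8

    // coalesce reduces the partition count while avoiding a full shuffle.
    println(numbers.coalesce(2).getNumPartitions) // 2

    sc.stop()
  }
}
```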
Knowing Spark's API is not the only useful skill. It's just as important to know when, and by which components, programs are executed.
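As a rough sketch of that distinction (assuming a local SparkContext; names are illustrative), code outside transformations runs in the driver process, while closures passed to transformations run on the executors:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ExecutionDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("execution-demo").setMaster("local[2]"))

    // This line executes in the driver process.
    val threshold = 5

    // The closure passed to filter is serialized and executed on the
    // executors, once an action triggers the job.
    val bigNumbers = sc.parallelize(1 to 10).filter(_ > threshold)

    // collect() is the action: the driver schedules the job and the
    // executors send their results back to it.
    println(bigNumbers.collect().mkString(", ")) // 6, 7, 8, 9, 10

    sc.stop()
  }
}
```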
As we already know, the RDD is Spark's main data abstraction. It's created either explicitly or implicitly, through computations called transformations and actions. These computations are organized as a graph and scheduled by Spark's components. This graph is called the DAG (directed acyclic graph), and it's the main topic of this post.
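A minimal sketch of how the DAG shows up in practice (local SparkContext assumed; names are illustrative): toDebugString prints the lineage that the scheduler turns into stages, with the shuffle introduced by reduceByKey marking a stage boundary:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object DagDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("dag-demo").setMaster("local[2]"))

    // Lazy transformations: Spark only records the lineage here.
    val counts = sc.parallelize(Seq("spark", "rdd", "dag", "spark"))
      .map(word => (word, 1))
      .reduceByKey(_ + _)

    // Prints the recorded lineage; the shuffle added by reduceByKey
    // becomes a stage boundary in the scheduled DAG.
    println(counts.toDebugString)

    // The action triggers the actual scheduling and execution.
    counts.collect().foreach(println) // e.g. (spark,2), (dag,1), (rdd,1)

    sc.stop()
  }
}
```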
In Spark, actions produce the final results of operations on RDDs. Without them, transformations are never executed and are of little use to applications.
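A short sketch of that behavior, under the same local-mode assumptions as above: the map below is only recorded, and each action forces it to actually run:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ActionsDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("actions-demo").setMaster("local[2]"))

    // A lazy transformation: nothing is computed yet.
    val squares = sc.parallelize(1 to 10).map(n => n * n)

    // Each action triggers the computation and returns a concrete
    // value to the application.
    println(squares.count())       // 10
    println(squares.first())       // 1
    println(squares.reduce(_ + _)) // 385

    sc.stop()
  }
}
```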
One way to generate a new RDD is to apply transformations to already existing RDDs. Transformations not only create new RDDs but also give meaning to the whole data processing pipeline.
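For illustration (same assumptions as the previous sketches), each transformation below derives a brand-new, immutable RDD from an existing one:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object TransformationsDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("transformations-demo").setMaster("local[2]"))

    val lines = sc.parallelize(Seq("spark makes rdds", "rdds are immutable"))

    // Every transformation returns a new RDD; the input is never mutated.
    val upperLongWords = lines
      .flatMap(_.split(" "))
      .filter(_.length > 4)
      .map(_.toUpperCase)

    // Only the final action materializes the whole chain.
    println(upperLongWords.collect().mkString(", ")) // SPARK, MAKES, IMMUTABLE

    sc.stop()
  }
}
```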
The first post about Spark internals concerns the Resilient Distributed Dataset (RDD), the abstraction used to represent processed data.
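As a quick sketch of that abstraction (local SparkContext assumed; the file path is a placeholder), an RDD can be created explicitly from a collection or external storage, or implicitly by transforming another RDD:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object RddDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("rdd-demo").setMaster("local[2]"))

    // Explicit creation from an in-memory collection.
    val fromCollection = sc.parallelize(Seq(1, 2, 3, 4))

    // Explicit creation from external storage (placeholder path).
    // val fromFile = sc.textFile("hdfs:///path/to/data.txt")

    // Implicit creation: a transformation derives a new RDD.
    val doubled = fromCollection.map(_ * 2)
    println(doubled.collect().mkString(", ")) // 2, 4, 6, 8

    sc.stop()
  }
}
```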