Below you can find all articles belonging to Data processing.
Spark Streaming is able to handle stateful operations, i.e. operations with a state that can be modified by subsequent batches of data.
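As a quick illustration, here is a minimal sketch of a stateful word count using updateStateByKey, assuming text lines arrive on a local socket (the host, port and checkpoint path are placeholder values):

```scala
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StatefulWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("stateful-sketch").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(5))
    // a checkpoint directory is mandatory for stateful transformations
    ssc.checkpoint("/tmp/stateful-checkpoint") // hypothetical path

    val counts = ssc.socketTextStream("localhost", 9999)
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      // merges the counts of the current batch with the state kept so far
      .updateStateByKey[Int]((newValues: Seq[Int], state: Option[Int]) =>
        Some(newValues.sum + state.getOrElse(0)))

    counts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
```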
Checkpoints allow Spark to truncate dependencies on previously computed RDDs. In the case of stream processing their role is extended. In addition, they are not the only way to protect against failures.
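A minimal sketch of RDD checkpointing, assuming a local SparkContext; the checkpoint directory is a placeholder. checkpoint() persists the RDD to reliable storage so its lineage can be truncated:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CheckpointSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("checkpoint-sketch").setMaster("local[2]"))
    sc.setCheckpointDir("/tmp/checkpoint-dir") // hypothetical path

    val numbers = sc.parallelize(1 to 100).map(_ * 2)
    numbers.checkpoint()           // marks the RDD for checkpointing
    println(numbers.count())       // the action triggers computation and checkpointing
    println(numbers.isCheckpointed)
    sc.stop()
  }
}
```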
Even if Spark Streaming globally uses the same configuration as batch processing, some entries are specific to streaming.
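For instance, a few streaming-specific entries can be set on SparkConf; the values below are illustrative only:

```scala
import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("streaming-conf-sketch")
  // adapts the ingestion rate to the observed processing speed
  .set("spark.streaming.backpressure.enabled", "true")
  // interval at which received data is chunked into blocks
  .set("spark.streaming.blockInterval", "200ms")
  // caps the number of records per second read by each receiver
  .set("spark.streaming.receiver.maxRate", "1000")
```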
Standard data sources, such as files, queues or sockets, are natively implemented in the Spark Streaming context. But the framework also allows the creation of more flexible data consumers called receivers.
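Here is a minimal sketch of a custom receiver that simply emits a constant message every second; a real receiver would read from an external system:

```scala
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.receiver.Receiver

class TickReceiver extends Receiver[String](StorageLevel.MEMORY_ONLY) {
  override def onStart(): Unit = {
    // onStart must not block, so the work goes to a dedicated thread
    new Thread("tick-receiver") {
      override def run(): Unit = {
        while (!isStopped()) {
          store("tick")      // hands one record over to Spark
          Thread.sleep(1000)
        }
      }
    }.start()
  }

  override def onStop(): Unit = () // the loop above checks isStopped()
}

// Usage, given a StreamingContext named ssc:
// val ticks = ssc.receiverStream(new TickReceiver())
```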
Spark Streaming is not static and allows DStreams to be converted to new types. Exactly as in batch-oriented processing, this is done through transformations.
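A minimal sketch of chained DStream transformations, assuming a StreamingContext named ssc and a placeholder socket source; each call produces a new DStream, mirroring the batch RDD API:

```scala
val lines = ssc.socketTextStream("localhost", 9999)
val words = lines.flatMap(_.split(" "))      // DStream[String]
val longWords = words.filter(_.length > 3)   // DStream[String]
val pairs = longWords.map(word => (word, 1)) // DStream[(String, Int)]
val counts = pairs.reduceByKey(_ + _)        // DStream[(String, Int)]
counts.print()
```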
In batch-oriented Spark, the RDD was the data abstraction. In Spark Streaming RDDs are still present, but another data type is exposed to the programmer - the DStream.
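A minimal sketch showing that a DStream is backed by RDDs, assuming a StreamingContext named ssc and a placeholder socket source; foreachRDD exposes the RDD generated for every batch:

```scala
val lines = ssc.socketTextStream("localhost", 9999)
lines.foreachRDD { (rdd, time) =>
  // at each batch interval the DStream materializes one RDD
  println(s"Batch at $time contains ${rdd.count()} records")
}
```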
Spark has an interesting concept of variables shared among all distributed computations. This special kind of object is called a broadcast variable. But it's not the only way to share objects in Spark - the second one is the accumulator.
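A minimal sketch of both kinds of shared variables, assuming a SparkContext named sc; the broadcast variable ships a read-only lookup table to the executors, while the accumulator aggregates a counter back on the driver:

```scala
val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))
val misses = sc.longAccumulator("misses")

val data = sc.parallelize(Seq("a", "b", "c"))
val resolved = data.map { key =>
  val value = lookup.value.get(key)
  // note: accumulator updates inside transformations may be re-applied on task retries
  if (value.isEmpty) misses.add(1)
  value.getOrElse(-1)
}
resolved.collect()                       // the action triggers the updates
println(s"Missing keys: ${misses.value}")
```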
Partitioning is quite a common concept in distributed data processing. Spark is not an exception and it also has some operations related to partitions.
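A minimal sketch of partition-related operations, assuming a SparkContext named sc:

```scala
import org.apache.spark.HashPartitioner

val pairs = sc.parallelize(Seq(("a", 1), ("b", 2), ("c", 3)), numSlices = 2)
println(pairs.getNumPartitions)          // initial partition count: 2

val repartitioned = pairs.partitionBy(new HashPartitioner(4))
println(repartitioned.getNumPartitions)  // now 4, keys distributed by hash

// mapPartitions processes one whole partition per call instead of one record
val sizes = repartitioned.mapPartitions(iter => Iterator(iter.size))
println(sizes.collect().mkString(", "))
```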
Knowledge of Spark's API is not the only useful thing. It's also important to know when and by whom programs are executed.
As we already know, the RDD is the main data concept of Spark. It's created either explicitly or implicitly, through computations called transformations and actions. But these computations are all organized as a graph and scheduled by Spark's components. This graph is called the DAG and it's the main topic of this post.
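A minimal sketch making the graph visible, assuming a SparkContext named sc; toDebugString prints the lineage of transformations Spark scheduled to produce the final RDD:

```scala
val result = sc.parallelize(1 to 10)
  .map(_ * 2)
  .filter(_ > 5)
  .map(n => (n % 3, n))
  .reduceByKey(_ + _)    // introduces a shuffle, hence a new stage

println(result.toDebugString)
```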
Spark Streaming is a powerful extension of Spark which helps to work with streams efficiently. In this article we'll present the basic concepts of this extension.
In Spark, actions produce the final results of operations on RDDs. Without them, transformations are meaningless and difficult for applications to use.
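A minimal sketch of actions, assuming a SparkContext named sc; the transformation only describes the computation, while the actions below actually run it and return results to the driver:

```scala
val doubled = sc.parallelize(1 to 5).map(_ * 2) // nothing executed yet

println(doubled.count())                  // action: 5
println(doubled.reduce(_ + _))            // action: 30
println(doubled.collect().mkString(", ")) // action: 2, 4, 6, 8, 10
```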
One of the methods for generating a new RDD consists of applying transformations to already existing RDDs. But transformations not only make new RDDs, they also give meaning to the whole data processing.
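A minimal sketch of transformations, assuming a SparkContext named sc; each call returns a new immutable RDD derived from the previous one, and nothing is executed until an action is invoked:

```scala
val numbers = sc.parallelize(1 to 10)
val even = numbers.filter(_ % 2 == 0)     // new RDD[Int]
val squared = even.map(n => n * n)        // another new RDD[Int]
println(squared.collect().mkString(", ")) // the action triggers the chain
```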
The first post about Spark internals concerns the Resilient Distributed Dataset (RDD), an abstraction used to represent processed data.
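A minimal sketch of creating RDDs, assuming a local SparkContext and a placeholder file path; RDDs can be built from in-memory collections or from external storage:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(
  new SparkConf().setAppName("rdd-sketch").setMaster("local[2]"))
val fromCollection = sc.parallelize(Seq("a", "b", "c"))
val fromFile = sc.textFile("/tmp/input.txt") // hypothetical path
println(fromCollection.count())
```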