Design patterns applied to data

GoF Design Patterns are pretty easy to understand if you are a programmer. You can read one of many books or articles and analyze their implementation in the programming language of your choice. But they can be less obvious for data people with a weaker software engineering background. If you are in this group and wondering what these GoF Design Patterns are about, I hope this article will help a bit.

Continue Reading →

Isolation level in Apache Kafka consumers

Talking about transactions automatically brings up isolation levels, i.e., what a consumer can see from uncommitted transactions. Apache Kafka also implements this concept, and I will take a closer look at it in this blog post.
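To give a first intuition, here is a minimal, purely illustrative sketch of the two consumer isolation levels. The read_uncommitted and read_committed values are the real settings of the consumer's isolation.level property; the Record/poll model below is invented for the example and is not the real client API:

```python
# Toy model of Kafka's consumer isolation levels (illustration only,
# NOT the real client API).
from dataclasses import dataclass

@dataclass
class Record:
    offset: int
    value: str
    txn_state: str  # "committed", "aborted" or "open"

def poll(log, isolation_level="read_uncommitted"):
    """Return the record values visible under the given isolation level."""
    if isolation_level == "read_committed":
        # Only records from committed transactions are returned; aborted
        # and still-open transactions are filtered out (the real broker
        # additionally stops reading at the Last Stable Offset).
        return [r.value for r in log if r.txn_state == "committed"]
    # read_uncommitted: every written record is visible, even if its
    # transaction later aborts.
    return [r.value for r in log]

log = [
    Record(0, "a", "committed"),
    Record(1, "b", "aborted"),
    Record(2, "c", "open"),
]
print(poll(log, "read_uncommitted"))  # ['a', 'b', 'c']
print(poll(log, "read_committed"))    # ['a']
```
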

Continue Reading →

Drop is a...select

Have you ever wondered about the relationship between the drop and select operations in Apache Spark SQL? If not, I will shed some light on it in this short blog post.
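As a teaser of the idea, here is a toy illustration of that relationship (not Spark's actual implementation): if you model a DataFrame by its list of column names, drop can be rewritten as a select of all the remaining columns.

```python
# Toy model: a "DataFrame" is just its list of column names.

def select(schema, *to_keep):
    # Keep only the requested columns, in the requested order.
    return [c for c in to_keep if c in schema]

def drop(schema, *to_drop):
    # drop is "just" a select of every column that is not being dropped.
    return select(schema, *[c for c in schema if c not in to_drop])

schema = ["id", "name", "age"]
print(drop(schema, "age"))           # ['id', 'name']
print(select(schema, "id", "name"))  # ['id', 'name']
```
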

Continue Reading →

Data+AI Summit follow-up: global limit and state management

It's the second Data+AI Summit follow-up post, but the first one focusing on stateful operations and their interaction with the state store.

Continue Reading →

Project Oryx - Lambda architecture for data science

Lambda architecture is one of the first officially defined Big Data architectures. However, after some time it was superseded by simpler approaches like Kappa. Despite that, you can still find projects built on Lambda, and one of them that grabbed my attention is Project Oryx.

Continue Reading →

Data+AI follow-up: StateStoreRDD - building block for stateful processing

The main Apache Spark component enabling stateful processing is StateStoreRDD. It not only creates a partition-based state store instance but also triggers state-based computation.

Continue Reading →

What's new on the cloud for data engineers - part 1 (08-10.2020)

Cloud computing has been present in my life for 4 years, and I have never found a good system to keep myself up to date. It's even more critical at this moment, when I'm trying to follow what happens on the 3 major providers (AWS, Azure, GCP). Since blogging helped me achieve that for Apache Spark, and along the way learn from you, I'm going to try the same solution for the cloud.

Continue Reading →

What's new in Apache Spark 3.0 - Kubernetes

I believe Kubernetes is the next big step for the framework, after the Catalyst Optimizer, the modernization of stream processing with Structured Streaming, and the introduction of Adaptive Query Execution. Especially since Apache Spark 3 brings a lot of changes in this area!

Continue Reading →

Broadcasting in Structured Streaming

Some time ago @ArunJijo36 mentioned me on Twitter with a question about broadcasting in Structured Streaming. If, like me at the time, you don't know what happens, I think this article will be good for you 👊

Continue Reading →

What's new in Apache Spark 3.0 - GPU-aware scheduling

GPU-awareness was one of the topics I postponed the most in my Apache Spark 3.0 exploration. But its time has come, and in this blog post you will discover what changed in version 3 of the framework regarding GPU-based computation.

Continue Reading →

File source and its internals

A few months ago, before the Apache Spark 3.0 features series, you probably noticed a short series about file processing in Structured Streaming. If you enjoyed it, here is a complementary note presenting the file data source :)

Continue Reading →

What's new in Apache Spark 3 - Structured Streaming

Apache Kafka changes in Apache Spark 3.0 were one of the first topics covered in the "what's new" series. Even though there were a lot of changes related to the Kafka source and sink, they're not the only ones in Structured Streaming.

Continue Reading →

An ideal cloud for a data engineer

I had the chance to use, for longer or shorter periods of time, 3 different cloud providers. In this post I would like to share with you what my perfect cloud provider could look like.

Continue Reading →

What's new in Apache Spark 3.0 - UI changes

Apart from data processing-related changes, Apache Spark 3.0 also brings some changes at the UI level. The interface is supposed to be more intuitive and should help you understand processing logic better!

Continue Reading →

Data deduplication with an intermediate data store

Last year I wrote a blog post about a batch layer in streaming-first architectures like Kappa. I presented there a few approaches to synchronize the streaming broker with an object store or a distributed file system, without introducing duplicates. A few months ago I found another architectural design that I would like to share with you here.
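To sketch the general deduplication idea (the names DedupWriter, seen_ids and sink are hypothetical, not taken from the design described in the post): an intermediate store records which ids were already delivered, so replaying the stream does not duplicate records in the final store.

```python
# Minimal sketch of idempotent writes through an intermediate data store.

class DedupWriter:
    def __init__(self):
        self.seen_ids = set()  # the "intermediate data store"
        self.sink = []         # the final object store / file system

    def write(self, record_id, payload):
        if record_id in self.seen_ids:
            return False  # already delivered in a previous attempt, skip
        self.seen_ids.add(record_id)
        self.sink.append(payload)
        return True

writer = DedupWriter()
writer.write(1, "event-a")
writer.write(1, "event-a")  # replayed message, ignored
writer.write(2, "event-b")
print(writer.sink)  # ['event-a', 'event-b']
```

In a real system the seen_ids set would live in a durable store (e.g. a key-value database) shared by all writers, but the visibility rule stays the same.
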

Continue Reading →