GCP Dataflow by an Apache Spark guy

Some months ago I wrote a blog post presenting BigQuery from the perspective of an Apache Spark user. Today I will do the same exercise, but this time for a tool from the same category of data processing frameworks. In other words, I will try to understand GCP Dataflow thanks to my Apache Spark knowledge!

Continue Reading →

What's new in Apache Spark 3.1 - nodes decommissioning

I have a feeling that a lot of scalability-related things happened in the 3.1 release. The General Availability of the Kubernetes support, which I will cover next week, is only one of them. The second one is node decommissioning!

Continue Reading →

Right to be forgotten patterns: vertical partitioning

In my previous post I shared with you an approach called crypto-shredding that can serve as a solution for the "right to be forgotten" requirement of the GDPR. One of its drawbacks is performance degradation due to the need to fetch and decrypt every sensitive value. To overcome it, I first thought about a cache, but I ended up understanding that the answer is not a cache but something else! I will explain it in this blog post.

Continue Reading →

What's new in Apache Spark 3.1 - predicate pushdown for JSON, CSV and Apache Avro

Predicate pushdown is a data processing technique that takes user-defined filters and executes them while reading the data. Apache Spark already supported it for Apache Parquet and RDBMS sources. Starting from Apache Spark 3.1.1, you can also use it with the Apache Avro, JSON and CSV formats!
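
To illustrate the idea, here is a minimal Scala sketch (the file name and the column are made up for the example); with the pushdown flags enabled - spark.sql.csv.filterPushdown.enabled and its JSON/Avro counterparts, true by default - the physical plan should list the condition among the pushed filters of the scan:

```scala
import org.apache.spark.sql.SparkSession

object CsvPushdownExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("csv-filter-pushdown")
      .master("local[*]")
      .getOrCreate()

    // hypothetical input file and schema, just for the demonstration
    val users = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/tmp/users.csv")

    // the filter can be evaluated while the rows are being read,
    // so non-matching records are skipped before reaching later operators
    val adults = users.filter("age >= 18")

    // the physical plan should expose the condition in the scan's
    // PushedFilters section, e.g. [GreaterThanOrEqual(age,18)]
    adults.explain()
  }
}
```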

Continue Reading →

Right to be forgotten patterns: crypto-shredding

Thanks to the most recent data regulation policies, we can ask a service to delete our personal data. Even though this seems relatively easy in a Small Data context, it's a bit more challenging for Big Data systems. Fortunately - subject to the approval of your legal department - there is a smart solution to that problem called crypto-shredding.
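
The article explains the pattern in detail; below is only my minimal Scala sketch of the general idea (the names and the in-memory key store are made up): every user's personal data is encrypted with a user-specific key stored separately, so "forgetting" a user boils down to deleting that key, which leaves the persisted ciphertext unreadable:

```scala
import javax.crypto.{Cipher, KeyGenerator, SecretKey}
import java.util.Base64

object CryptoShreddingSketch {
  // hypothetical in-memory key store; a real system would use a dedicated KMS
  private var keyStore = Map.empty[String, SecretKey]

  private def keyFor(userId: String): SecretKey =
    keyStore.getOrElse(userId, {
      val generator = KeyGenerator.getInstance("AES")
      generator.init(128)
      val key = generator.generateKey()
      keyStore += (userId -> key)
      key
    })

  def encrypt(userId: String, value: String): String = {
    val cipher = Cipher.getInstance("AES")
    cipher.init(Cipher.ENCRYPT_MODE, keyFor(userId))
    Base64.getEncoder.encodeToString(cipher.doFinal(value.getBytes("UTF-8")))
  }

  // returns None once the user's key has been shredded
  def decrypt(userId: String, encrypted: String): Option[String] =
    keyStore.get(userId).map { key =>
      val cipher = Cipher.getInstance("AES")
      cipher.init(Cipher.DECRYPT_MODE, key)
      new String(cipher.doFinal(Base64.getDecoder.decode(encrypted)), "UTF-8")
    }

  // the "right to be forgotten": drop the key, the ciphertext becomes garbage
  def forget(userId: String): Unit = keyStore -= userId
}
```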

Continue Reading →

What's new in Apache Spark 3.1 - Project Zen

I mentioned it briefly in my first ever blog post about PySpark. Thanks to the Project Zen initiative, the Python part of Apache Spark will become more Pythonic and user-friendly. How? Let's check that in this blog post!

Continue Reading →

ML for data engineers - what I learned when preparing GCP Data Engineer certification

I wrote this blog post a week before passing the GCP Data Engineer exam, hoping it would help to organize a few things in my head (it did!). I hope it will also help you understand ML from a data engineering perspective!

Continue Reading →

What's new in Apache Spark 3.1 - Structured Streaming

Aside from the join changes covered in the previous blog post, Structured Streaming also got a few other interesting new features, which I will present here.

Continue Reading →

AWS Redshift vs GCP BigQuery

Despite the recent architectural proposals built on the lakehouse principle, a data warehouse is still an important part of a data system. But there is no single way to build one, and if you analyze the cloud providers, you will see various offerings like Redshift (AWS) or BigQuery (GCP), both presented in this article.

Continue Reading →

What's new in Apache Spark 3.1 - streaming joins

In the previous blog post, you discovered what changed for joins in Apache Spark 3.1. If you remember the summary sentence, those were not the only join changes in this new release. Apart from them, you can also do a bit more with Structured Streaming joins!

Continue Reading →

What's new in Apache Spark 3.1 - join evolutions

I have been waiting to write this blog post since the Data+AI Summit 2020, where Cheng Su presented the ongoing effort to improve shuffle and stream-to-stream joins in Apache Spark 3.1. In this blog post, I will start by sharing what changed for joins in the new release of the framework!

Continue Reading →

Unified Data Management patterns

I have written a lot of blog posts by chance, after losing myself on the Internet. That's also the case for the one you're currently reading. I was looking for Delta Lake learning resources and found an interesting diagram depicting the Unified Data Management patterns. Since this term was new to me, and I like everything with "pattern" in the name, I couldn't miss the opportunity to explore the topic!

Continue Reading →

DataOps - good and bad points

Now that I have introduced the DataOps concepts, it's a good time to share my feelings about them!

Continue Reading →

Under-the-hood: coalesce

It's probably one of the most common questions you may hear in preliminary job interviews: what's the difference between coalesce and repartition? Many answers exist, but instead of repeating them, I will try to dig a bit deeper in this blog post and see how coalesce works.
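
As a quick reminder of the usual answer - a minimal sketch, not taken from the article: coalesce(n) merges existing partitions without a shuffle, whereas repartition(n) always shuffles the data:

```scala
import org.apache.spark.sql.SparkSession

object CoalesceVsRepartition {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("coalesce-vs-repartition")
      .master("local[*]")
      .getOrCreate()

    val numbers = spark.range(0, 1000, 1, numPartitions = 8)

    // no shuffle: partitions are merged locally,
    // so the plan contains a Coalesce node but no Exchange
    numbers.coalesce(2).explain()

    // full shuffle: an Exchange with RoundRobinPartitioning
    // appears in the physical plan
    numbers.repartition(2).explain()
  }
}
```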

Continue Reading →

GCP BigQuery by an Apache Spark guy

One of the steps in my preparation for the GCP Data Engineer certificate was working through the "Google BigQuery: The Definitive Guide: Data Warehousing, Analytics, and Machine Learning at Scale" book. And to be honest, I didn't expect that knowing Apache Spark would help me so much in understanding the architectural concepts. If you don't believe me, I will try to convince you in this blog post.

Continue Reading →