Apache Spark as you don't know it

It's difficult to see all the use cases of a framework. Back when I was a backend engineer, I never managed to see all the applications of the Spring framework. Now that I'm a data engineer, I feel the same about Apache Spark. Fortunately, the community is there to show me some outstanding features!

Continue Reading →

Worth reading for data engineers - part 1

Hi and welcome to the new series. This time I won't blog about my own discoveries. Instead, I'm going to review other blog posts from the data engineering space and share some key takeaways with you. I don't know how regular it will be yet, but hopefully I'll be able to share some notes every month.

Continue Reading →

Useful classes for data engineers - Scala & Java

We all have our habits, and as programmers, libraries and frameworks are definitely part of them. In this blog post I'll share a list of Java and Scala classes I use in almost every data engineering project. The Python part will follow next week!

Continue Reading →

Table file formats - Compaction: Delta Lake

Small files are a well-known problem in data systems. Fortunately, modern table file formats have built-in features to address it. In the next series we'll see how.

Continue Reading →

Retrospective: 2022 on waitingforcode.com

A new year is coming and it's a great moment to summarize what has happened on the blog and around it!

Continue Reading →

What's new on the cloud for data engineers - part 8 (09-12.2022)

It's the last update on data engineering news on the cloud this year. A lot of things came out, especially for stream processing!

Continue Reading →

Apache Airflow 2 overview - part 2

Welcome to the 2nd blog post dedicated to Apache Airflow 2 features. This time it'll be more about the custom code you can add to the most recent version.

Continue Reading →

Cloud authentication and data processing jobs

Setting up a data processing layer involves several phases. You need to write the job, define the infrastructure and the CI/CD pipeline, integrate with the data orchestration layer, ... and finally, ensure the job can access the relevant datasets. The most basic authentication mechanism uses a login/password pair, but can we do better in the cloud? Let's see!

Continue Reading →

Apache Airflow 2 overview - part 1

Apache Airflow 2 introduced a lot of new features. The most visible one is probably the reworked UI, but there is more! In this and the next blog post I'll show some of the interesting new Apache Airflow features.

Continue Reading →

Shuffle in PySpark

Shuffle is, for me, a never-ending story. Last year I spent long weeks analyzing the readers and writers, and was hoping for some rest in 2022. However, it didn't happen. My recent PySpark investigation led me to the shuffle.py file, and my first reaction was "Oh, so PySpark has its own shuffle mechanism?". Let's check this out!

Continue Reading →

Azure Synapse Link as Hybrid Transactional/Analytical Processing

I discovered the term in the title while learning about the Azure Synapse and Cosmos DB services. I had heard of NoSQL, and even NewSQL, but never of a solution supporting analytical and transactional workloads at once.

Continue Reading →

Serializers in PySpark

We've learned in the previous PySpark blog posts about the serialization overhead between the Python application and the JVM. Python serializers are an intrinsic part of this overhead; they're the topic of this article, which will hopefully give a more complete overview of the Python <=> JVM serialization.

Continue Reading →

Data contracts

The modern data space is an exciting place, with a lot of innovation in recent years. The single drawback of that movement is all the new buzzwords and the time required to understand them and decide whether they're something our organization could use or not. Recently I've been seeing "data contracts" more and more on social media. It's also a new term, and I'd like to see if and how it revolutionizes the data space.

Continue Reading →

Generated method too long to be JIT compiled

There are days like that. You inherit some code and it doesn't really work as expected. While digging into the issues, you find the usual weird warnings but also several new things. For me, one of these things was the "Generated method too long to be JIT compiled..." info message.

Continue Reading →

Table file formats - Change Data Capture: Delta Lake

It's time to start the 4th part of the Table file formats series. This time the topic will be Change Data Capture, i.e. how to stream all the changes made to a table. As in the 3rd part, I'm going to start with Delta Lake.

Continue Reading →