The "Data is the new Oil" is one of popular sentences describing the huge role of data in our world. And as other resources, data must be extracted too. To find these "Oil workers", organizations look for, among others, data engineers. The task is more or less easier and this difficulty depends on various factors. From my 6-years perspective, one of the key starting elements is the job announcement.
It's time for the last "What's new in Apache Spark 3.3.0..." post before a break. Today we'll see what changed in PySpark. Spoiler alert: Pandas users should find one feature very exciting!
Even though Project Lightspeed is not there yet, Apache Spark Structured Streaming 3.3.0 has several interesting features that should make your daily life easier.
After a break for the Data+AI Summit retrospective, it's time to return to Apache Spark 3.3.0 and see what changed for the DataSource V2 API.
Yesterday I shared with you the human part of my Data+AI Summit. It's time now to give you my takeaways from the technical talks.
There will be many "first times" in our lives. For me, the Data+AI Summit 2022 was the first time I visited the USA, added a third dimension to the pictures of my virtual friends, and felt huge community support in a very troubled moment. Besides, I also enjoyed the talks and the walking, even though the latter wasn't so good for my skin ;)
When I prepare the "What's new on the cloud..." series, I'm pretty sure that for Azure most of the updates will go to the Azure SQL service. The main idea of the service is simple, but if you analyze it more deeply, you'll find some concepts that might not be the easiest to understand at first.
New Apache Spark SQL functions are a regular fixture in my "What's new in Apache Spark..." series. Let's see what has changed in the most recent (3.3.0) release!
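To give a flavor, here is a minimal sketch; it assumes, based on the 3.3.0 release notes, that try_sum and split_part are among the additions:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

# split_part extracts the n-th field from a delimited string
spark.sql("SELECT split_part('a,b,c', ',', 2) AS second_field").show()

# try_sum returns NULL instead of raising an error on overflow,
# extending the try_* family started in earlier releases
spark.sql("SELECT try_sum(col) AS total FROM VALUES (1), (2), (NULL) AS t(col)").show()
```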
More and more often in my daily contact with the data world I hear the word "modern". And I couldn't get it. I was doing cloud data engineering with Apache Spark and Apache Beam, so wasn't that modern at all? I still have no idea as I write this introduction. But I hope to know more about the term by the end of the article!
Joins are probably the most popular operation for combining datasets, and Apache Spark already supports multiple types of them! In the new release, the framework got 2 new join improvements: storage-partitioned joins and row-level runtime filters.
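As a teaser before the details, here is a minimal sketch of how both can be switched on; the two configuration entries are my assumption of the relevant 3.3.0 flags, and storage-partitioned joins additionally require compatible DataSource V2 tables on both sides:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("local[*]")
         # storage-partitioned joins: join compatible DataSource V2 tables
         # on their partition keys without shuffling them first
         .config("spark.sql.sources.v2.bucketing.enabled", "true")
         # row-level runtime filters: build a Bloom filter from the smaller
         # side of the join and use it to prune the bigger side early
         .config("spark.sql.optimizer.runtime.bloomFilter.enabled", "true")
         .getOrCreate())

orders = spark.range(1_000_000).withColumnRenamed("id", "customer_id")
customers = spark.range(1_000).withColumnRenamed("id", "customer_id")
orders.join(customers, "customer_id").explain()
```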
It's time for the last data generation part of the ACID file formats series. This time we'll see how Delta Lake writes new files.
The topic of this blog post is not new because the discussed sort algorithms have been there since Apache Spark 2. But somehow I've never had a chance to present them, so I'll try to do it now.
Last time you discovered data writing in Apache Hudi. Today it's time to see the 2nd file format from my list, Apache Iceberg.
I remember the first PySpark code I saw. It was pretty similar to the Scala code I used to work with, except for one small detail: the yield keyword. Since then, I've understood its purpose but have been actively looking for an occasion to blog about it. Growing the PySpark section is a great opportunity for this!
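For readers who haven't met it yet, a hypothetical illustration of where yield usually appears in PySpark: a function passed to mapPartitions that, thanks to the keyword, becomes a generator and streams rows lazily instead of materializing the whole partition in memory:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.master("local[*]").getOrCreate()

def double_rows(rows):
    # yield makes this a generator: rows are produced one at a time,
    # so the partition is never fully buffered in memory
    for row in rows:
        yield row * 2

rdd = spark.sparkContext.parallelize([1, 2, 3, 4], numSlices=2)
print(rdd.mapPartitions(double_rows).collect())  # [2, 4, 6, 8]
```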
It was only when I was preparing the 2nd blog post of the series that I realized how bad my initial plan was. The article you're currently reading had initially been planned as the 6th in the series. But indeed, how could we understand more advanced features without discovering the writing path first?