Storage articles

Looking for something else? Check the categories of Storage:

Apache Avro, Apache Cassandra, Apache Hudi, Apache Iceberg, Apache Parquet, Apache ZooKeeper, Delta Lake, Elasticsearch, Embedded databases, HDFS, MySQL, PostgreSQL, Time series

If not, below you can find all articles belonging to Storage.

Gnocchi architecture

Understanding the architecture is the key to working properly with any distributed system. That's why the series of posts about Gnocchi starts by exploring its components.

Continue Reading →

Choosing time-series database for study

To learn a new thing, there is nothing better than trying it. However, in some cases choosing the tool to study is not easy. That's especially true in the context of data storage, and thus also for the time-series databases introduced in one of the previous posts.

Continue Reading →

Time series - general notes

Temporal data is a little particular. It can be generated very frequently, for instance every 500 ms or less. It's therefore important to store it efficiently and to allow quick and flexible reads. It's also important to know the specificities of time series as a popular case of temporal data.

Continue Reading →

Range query algorithm in Apache Cassandra

When I was learning about secondary indexes in Cassandra, I found a mention of a special Cassandra algorithm used for range and secondary index queries. After some time spent exploring the secondary index mechanism, it's a good moment to discover the algorithm that makes it work.

Continue Reading →

Compression in Parquet

Last time we discovered the different encoding methods available in Apache Parquet. But encoding is not the only technique helping to reduce the size of files. The other one, very similar, is compression.
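As a quick illustration, here is a minimal sketch of how a codec could be chosen with parquet-mr's example writer; the schema, the output path and the written record are invented for the purpose of the example:

```java
import org.apache.hadoop.fs.Path;
import org.apache.parquet.example.data.Group;
import org.apache.parquet.example.data.simple.SimpleGroupFactory;
import org.apache.parquet.hadoop.ParquetWriter;
import org.apache.parquet.hadoop.example.ExampleParquetWriter;
import org.apache.parquet.hadoop.metadata.CompressionCodecName;
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;

public class CompressedWriteExample {
  public static void main(String[] args) throws Exception {
    // Hypothetical schema and output path, only for illustration
    MessageType schema = MessageTypeParser.parseMessageType(
        "message event { required int64 timestamp; required binary payload; }");

    try (ParquetWriter<Group> writer =
             ExampleParquetWriter.builder(new Path("/tmp/events.parquet"))
                 .withType(schema)
                 // Compression is applied on top of the encodings, per column chunk
                 .withCompressionCodec(CompressionCodecName.SNAPPY)
                 .build()) {
      Group row = new SimpleGroupFactory(schema).newGroup()
          .append("timestamp", 1L)
          .append("payload", "hello");
      writer.write(row);
    }
  }
}
```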

Continue Reading →

Nested data representation in Parquet

Working with nested structures is a known problem in column-oriented storage. However, thanks to the solution from Google's Dremel, this task can be handled efficiently.

Continue Reading →

Encodings in Apache Parquet

Efficient data storage is one of the keys to a good storage format. One of the methods helping to achieve it is appropriate encoding, and Parquet comes with several different encodings.

Continue Reading →

Schema versions in Parquet

When I started to play with Apache Parquet, I was surprised to find 2 versions of writers. Before approaching the rest of the planned topics, it's a good moment to explain these different versions better.

Continue Reading →

Data storage in Apache Parquet

Previously we focused on the types available in Parquet. This time we can move forward and analyze how the framework stores data in files.

Continue Reading →

Data types in Apache Parquet

Data in Apache Parquet files is written against a specific schema. And a schema automatically implies data types for the fields composing it.
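To give a rough idea before the article, here is a small sketch (assuming the parquet-mr schema API) where a schema is parsed and the physical type of each field is read back; the "user" schema itself is invented for illustration:

```java
import org.apache.parquet.schema.MessageType;
import org.apache.parquet.schema.MessageTypeParser;
import org.apache.parquet.schema.PrimitiveType;
import org.apache.parquet.schema.Type;

public class ParquetSchemaExample {
  public static void main(String[] args) {
    // A hypothetical schema: every field declares a repetition and a physical type
    MessageType schema = MessageTypeParser.parseMessageType(
        "message user {\n" +
        "  required int32 id;\n" +
        "  optional binary email (UTF8);\n" +
        "  optional double balance;\n" +
        "}");

    // The schema drives the types: here we simply print the physical type of each field
    for (Type field : schema.getFields()) {
      PrimitiveType primitive = field.asPrimitiveType();
      System.out.println(field.getName() + " -> " + primitive.getPrimitiveTypeName());
    }
  }
}
```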

Continue Reading →

Introduction to Apache Parquet

Very often an appropriate storage is as important as the data processing pipeline. Among the different possibilities, we can still store the data in files. Thanks to different formats, such as column-oriented ones, some actions in the read path can be optimized.

Continue Reading →

Dockerize Cassandra troubleshooting

Some time ago I tried to create a Docker image with Cassandra and some other programs. For the "others" the operation was quite easy, but Cassandra caused some problems because of several configuration properties.

Continue Reading →

Recovery in HDFS

The recovery process in HDFS helps to achieve fault tolerance. It concerns the write pipeline as well as blocks.

Continue Reading →

HDFS on disk explained

In the previous posts we learned a lot about HDFS transaction logs, operations on closed files and so on. Thanks to that, we can now take a look at the on-disk data structures of the NameNode and DataNode.

Continue Reading →

Checkpoint in HDFS

HDFS is not an exception in the Big Data world and, like other actors, it also uses checkpoints.

Continue Reading →

Handling small files in HDFS

HDFS is not a tool well suited to storing a lot of small files. Even so, some methods exist to handle small files better.

Continue Reading →

Snapshot in HDFS

Implementing snapshots in a distributed file system is not a simple job. It must take into account different aspects, such as file deletions or content changes, and keep the file system consistent across them.
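As a hedged sketch of the client-facing side (the internals are the subject of the post), the snapshot operations can be reached through the Hadoop FileSystem API roughly as below; the directory and snapshot names are invented:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class SnapshotExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path dir = new Path("/data/events"); // hypothetical snapshottable directory

    // Only a DistributedFileSystem can mark a directory as snapshottable
    // (the same operation is exposed through 'hdfs dfsadmin -allowSnapshot')
    ((DistributedFileSystem) fs).allowSnapshot(dir);

    // Create a read-only, point-in-time view of the directory
    Path snapshot = fs.createSnapshot(dir, "before-cleanup");
    System.out.println("Snapshot created at " + snapshot);

    // Later, the snapshot can be removed once it's no longer needed
    fs.deleteSnapshot(dir, "before-cleanup");
  }
}
```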

Continue Reading →

Cache in HDFS

Hadoop 2.3.0 brought an in-memory caching layer to HDFS. Even if it's quite an old feature (released in 02/2014), it's still beneficial to know it.
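For a rough picture, the caching layer is driven by cache pools and cache directives. A minimal sketch with the DistributedFileSystem API, assuming an invented pool name and path, could look like this:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;
import org.apache.hadoop.hdfs.protocol.CacheDirectiveInfo;
import org.apache.hadoop.hdfs.protocol.CachePoolInfo;

public class HdfsCacheExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    DistributedFileSystem dfs = (DistributedFileSystem) FileSystem.get(conf);

    // A cache pool groups cache directives and carries quotas and permissions
    dfs.addCachePool(new CachePoolInfo("hot-data")); // hypothetical pool name

    // A cache directive asks the NameNode to keep a path cached in DataNode memory
    long directiveId = dfs.addCacheDirective(new CacheDirectiveInfo.Builder()
        .setPath(new Path("/data/reference/lookup.parquet")) // hypothetical path
        .setPool("hot-data")
        .build());
    System.out.println("Cache directive id: " + directiveId);
  }
}
```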

Continue Reading →

Append and truncate in HDFS

Making an immutable distributed file system is easier than building a mutable one. Even if HDFS was initially intended for data that doesn't change, it supports mutability through 2 operations: append and truncate.
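As a small illustration of the two operations from the client side, a sketch based on the Hadoop FileSystem API (the file path and the new length are invented) might look like this:

```java
import java.nio.charset.StandardCharsets;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendTruncateExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    Path file = new Path("/logs/app.log"); // hypothetical existing file

    // Append: add new bytes at the end of an already closed file
    try (FSDataOutputStream out = fs.append(file)) {
      out.write("one more line\n".getBytes(StandardCharsets.UTF_8));
    }

    // Truncate: cut the file down to the requested length (here its first 1024 bytes);
    // returns true when the operation completes immediately, false when a block
    // recovery has to run first
    boolean done = fs.truncate(file, 1024L);
    System.out.println("Truncate finished immediately: " + done);
  }
}
```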

Continue Reading →

FSImage in HDFS

The edit log would be useless without its complementary structure called FSImage.

Continue Reading →