Looking for something else? Check the subcategories of Storage:
Apache Avro, Apache Cassandra, Apache Hudi, Apache Iceberg, Apache Parquet, Apache ZooKeeper, Delta Lake, Elasticsearch, Embedded databases, HDFS, MySQL, PostgreSQL, Time series
Otherwise, below you can find all articles belonging to Storage.
HDFS records everything that happens in transaction log files. They're used during checkpointing and file system recovery, so they occupy quite an important place in the HDFS architecture.
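The idea behind those transaction logs can be sketched in a few lines: mutations are appended to a log, and the namespace can be rebuilt by replaying it. This is a hypothetical miniature for illustration only, not the NameNode's actual edit log format.

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Toy model of a write-ahead transaction log: every namespace change is
// appended to the log before being applied, so the namespace is recoverable.
public class EditLogSketch {
    private final List<String> editLog = new ArrayList<>();
    private final Set<String> namespace = new HashSet<>();

    public void create(String path) {
        editLog.add("CREATE " + path); // persist the operation first
        namespace.add(path);           // then apply it in memory
    }

    public void delete(String path) {
        editLog.add("DELETE " + path);
        namespace.remove(path);
    }

    // Recovery: rebuild the namespace from scratch by replaying the log.
    public Set<String> replay() {
        Set<String> rebuilt = new HashSet<>();
        for (String entry : editLog) {
            String[] parts = entry.split(" ");
            if (parts[0].equals("CREATE")) {
                rebuilt.add(parts[1]);
            } else {
                rebuilt.remove(parts[1]);
            }
        }
        return rebuilt;
    }
}
```

After a crash, replaying the log yields exactly the namespace that existed before; checkpointing merely shortens the log that has to be replayed.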
Replicas and blocks are HDFS entities that go through more than one state during their lifecycle. Being aware of these states helps a lot in understanding what happens when a new file is written.
HDFS is a fault-tolerant and resilient distributed file system. That wouldn't be possible without, among other things, block replication.
The previous article presented theoretical information about HDFS files. This post explores the topic in more depth.
Files in HDFS differ from files in a local file system. They're fault-tolerant, can be stored through different abstractions, and are based on quite large blocks compared to the blocks of a local file system.
HDFS is one of the most popular distributed file systems nowadays. It differs from other, older distributed file systems thanks to its reliability.
Before writing some code against Apache Cassandra, we'll explore a very interesting dependency: cassandra-driver-mapping.
I/O operations are slower than memory lookups. That's why an in-memory cache helps to improve performance, in Cassandra too.
Among the interesting data types used in Apache Cassandra are collections. In our models we can freely use maps, sets, or lists.
Because tables in Apache Cassandra are very similar to tables in relational databases, this article describing them won't focus on the basics. Instead, we'll explore more Cassandra-specific subjects, such as configuration or the different table types.
Disk compaction helps to save space. Since Cassandra is supposed to store a lot of data, it can't do without this useful process.
Since Cassandra is a distributed storage system, it holds data on different nodes. But how does it determine which data should be stored by each node? That's the role of partitioners.
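The core job of a partitioner can be sketched very simply: hash the partition key to a token, then map the token onto one of the nodes. The sketch below is a deliberately simplified illustration, not Cassandra's actual Murmur3Partitioner or its token ring.

```java
// Simplified partitioner: partition key -> token -> node index.
// Real Cassandra hashes with Murmur3 and assigns token ranges to nodes;
// here we just use String.hashCode() and a modulo for illustration.
public class PartitionerSketch {
    public static int nodeFor(String partitionKey, int nodeCount) {
        int token = partitionKey.hashCode();
        // floorMod keeps the result in [0, nodeCount) even for negative hashes
        return Math.floorMod(token, nodeCount);
    }

    public static void main(String[] args) {
        // The same key always maps to the same node, which is what lets
        // the coordinator route reads and writes without a lookup table.
        System.out.println("user:42 -> node " + nodeFor("user:42", 4));
    }
}
```

The determinism is the important property: any node receiving a request can compute where the data lives from the key alone.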
Keeping old data forever takes up space and makes reads longer. Apache Cassandra is not an exception and has a mechanism to remove stale data.
Previously we presented the theory of data consistency in Cassandra. Now it's a good moment to show some examples of consistency levels.
Distributed data brings a new problem unknown to historical standalone relational databases: data consistency. Cassandra deals with this problem quite nicely through its different consistency levels.
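The arithmetic behind those consistency levels fits in a few lines. As a hedged sketch (not the driver's API): QUORUM is a majority of replicas, and a read/write pair is strongly consistent when the replicas they touch must overlap.

```java
// Consistency-level arithmetic for a replication factor (RF).
public class ConsistencySketch {
    // QUORUM = floor(RF / 2) + 1, i.e. a strict majority of replicas.
    public static int quorum(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    // A read is guaranteed to see the latest write when the sets of
    // replicas contacted by reads and writes must intersect:
    // readReplicas + writeReplicas > RF.
    public static boolean stronglyConsistent(int readReplicas, int writeReplicas, int rf) {
        return readReplicas + writeReplicas > rf;
    }

    public static void main(String[] args) {
        int rf = 3;
        System.out.println("QUORUM for RF=3: " + quorum(rf)); // 2
        // QUORUM reads + QUORUM writes overlap on at least one replica
        System.out.println(stronglyConsistent(quorum(rf), quorum(rf), rf)); // true
        // ONE + ONE does not: a read may hit a replica the write missed
        System.out.println(stronglyConsistent(1, 1, rf)); // false
    }
}
```

This is why the classic recipe "write at QUORUM, read at QUORUM" gives strong consistency while still tolerating a replica failure.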
Until now we've been working with Cassandra without looking at what happens under the hood. It's time to be a little more curious.
The previous article introduced us to Apache Cassandra by presenting its main concepts in broad strokes. This article focuses in more detail on data topics.
After some articles about data ingestion and serialization in Big Data applications, it's time to start learning about storage. This part begins with Apache Cassandra.
This article presents the basic concepts of Apache Cassandra. The first part tries to explain the architecture and general concepts of this solution. The second part focuses more on developer topics and describes some main points about data organization.
A lot of programming tools implement an event-driven approach. Apache ZooKeeper is no exception to this rule, with its system of watchers.
Apache ZooKeeper is very often compared to a distributed file system. Since every file system has a way to deal with file permissions, ZooKeeper, as a kind of file system, is no different.