I/O operations are slower than memory lookups. That's why an in-memory cache improves performance, and Cassandra is no exception.
This article describes the cache layer in Apache Cassandra. The first 3 parts present the available caches: key, row and counter. The last part shows the caches in action.
Key caching in Cassandra
To understand the role of this cache, we must recall how Cassandra looks up data. A read request arrives first at the Bloom filter, which decides whether the needed data may be stored in one of the managed SSTables. If not, the lookup stops there. If it may be, Cassandra asks the partition key cache where the partition holding the data begins in the SSTable. Thanks to that, Cassandra can go directly to the row containing the expected data. Without the key cache, Cassandra would first have to look at the partition index and scan it to find the right key range for the queried data.
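To make the flow more concrete, here is a minimal Java sketch of that lookup sequence. All names (KeyCacheLookupSketch, scanPartitionIndex, ...) are purely illustrative and do not map to Cassandra's real internal classes:

import java.util.HashMap;
import java.util.Map;

public class KeyCacheLookupSketch {

    private static final long NOT_FOUND = -1L;

    // partition key -> offset of the partition in the SSTable
    private final Map<String, Long> keyCache = new HashMap<>();

    public long findPartitionPosition(String partitionKey) {
        if (!bloomFilterMightContain(partitionKey)) {
            return NOT_FOUND; // the data is surely absent from this SSTable
        }
        Long cachedPosition = keyCache.get(partitionKey); // cheap in-memory lookup
        if (cachedPosition != null) {
            return cachedPosition; // jump straight to the partition on disk
        }
        long scannedPosition = scanPartitionIndex(partitionKey); // costly disk access
        keyCache.put(partitionKey, scannedPosition); // warm the cache for later reads
        return scannedPosition;
    }

    // stand-ins for the probabilistic filter and the on-disk partition index
    private boolean bloomFilterMightContain(String key) {
        return true;
    }

    private long scanPartitionIndex(String key) {
        return 0L;
    }
}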
The key cache is configurable through several entries in the cassandra.yaml file (a sample configuration follows the list):
- key_cache_size_in_mb: the maximum size of the key cache in memory. In the 3.4 release, this value defaults to the smaller of 5% of the JVM heap and 100 MB.
- key_cache_save_period: the number of seconds after which the key cache is persisted to disk. The default value is 14400 (4 hours). Thanks to this persistence mechanism, Cassandra keeps good performance even after a reboot, when memory has been cleared.
- key_cache_keys_to_save: the number of keys to save. By default this entry is disabled, meaning all keys are saved.
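As an illustration, the corresponding fragment of cassandra.yaml could look as follows (the values are sketches, not recommendations):

key_cache_size_in_mb: 100
key_cache_save_period: 14400
# disabled by default, meaning all keys are saved:
# key_cache_keys_to_save: 100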
This cache should not take a lot of space, and its use is encouraged to avoid too many disk seeks when reading rows.
Row caching in Cassandra
The second type of cache concerns all columns of data and is called the row cache. It puts a part of a partition into memory. In consequence, if the requested data is already kept in the row cache, Cassandra skips the operations described previously (asking the key cache, reading the row from disk).
However, the row cache should be used carefully. First of all, if one of the stored rows changes, it and its partition must be invalidated from the cache. So the row cache should favor rows frequently read but rarely modified. In addition, in the versions before 2.1, the row cache worked on whole partitions. It means that the whole partition had to be stored in the cache, and if its size was bigger than the available cache memory, the rows were never cached. Since the 2.1 release this has changed: we can now configure the row cache to keep only a specific number of rows per partition.
Regarding the configuration, the entries are similar to the ones of the key cache, with some exceptions (a sample configuration follows the list):
- row_cache_size_in_mb - as for the key cache, the maximum size. However, the 100 MB rule does not apply here; by default the row cache is disabled (the value is 0).
- row_cache_save_period - the number of seconds after which the cache is persisted to disk.
- row_cache_keys_to_save - the number of row cache keys to save.
- row_cache_class_name - the class implementing row cache management (the off-heap OHCProvider by default).
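Since the row cache is disabled by default, enabling it means at least setting a non-zero size. A minimal cassandra.yaml sketch, with illustrative values:

row_cache_size_in_mb: 100
row_cache_save_period: 0
row_cache_class_name: org.apache.cassandra.cache.OHCProvider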
Apart from that, row caching can be configured per table, through the caching clause at table creation or later with an ALTER TABLE statement, as sketched below.
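For example, enabling the caching of the first 100 rows of every partition on an existing table could look like this (the players table name is purely illustrative):

ALTER TABLE players
  WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};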
Counter cache in Cassandra
Another cache type is the counter cache, related to counter columns. Its role is to reduce lock contention for frequently updated cells of counter type. This cache is configured in the same manner as the other types. We find the same 3 entries, only prefixed with counter_cache_ instead of row_cache_ or key_cache_: counter_cache_size_in_mb, counter_cache_save_period, counter_cache_keys_to_save.
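The corresponding cassandra.yaml fragment can be sketched the same way. The values below are illustrative, except the save period of 7200 seconds, which is the default visible in the nodetool output later in this article:

counter_cache_size_in_mb: 50
counter_cache_save_period: 7200
counter_cache_keys_to_save: 100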
Example of cache in Cassandra
The key cache can be checked easily with the nodetool info command, executed from Cassandra's bin directory. Its output gives information about the quantity of key cache entries stored in the running engine:
bartosz@home:~/apache-cassandra-3.4/bin$ ./nodetool info
ID                     : 607629b7-75eb-4640-a9ac-adf9150e9b61
Gossip active          : true
Thrift active          : false
Native Transport active: true
Load                   : 317,42 KB
Generation No          : 1460969767
Uptime (seconds)       : 6761
Heap Memory (MB)       : 369,46 / 974,00
Off Heap Memory (MB)   : 0,00
Data Center            : datacenter1
Rack                   : rack1
Exceptions             : 0
Key Cache              : entries 24, size 1,94 KB, capacity 48 MB, 687 hits, 867 requests, 0,792 recent hit rate, 100 save period in seconds
Row Cache              : entries 0, size 0 bytes, capacity 100 MB, 6 hits, 12 requests, 0,500 recent hit rate, 0 save period in seconds
Counter Cache          : entries 0, size 0 bytes, capacity 24 MB, 0 hits, 0 requests, NaN recent hit rate, 7200 save period in seconds
Token                  : (invoke with -T/--tokens to see all 256 tokens)
As you can see, the key cache save period is quite small (100 seconds). It was set deliberately low to show how to detect key cache persisting activity. This activity can be monitored through the logs, because each time the key cache is persisted, entries like the following appear:
INFO [CompactionExecutor:6] AutoSavingCache.java:386 - Saved KeyCache (24 items) in 74 ms
INFO [CompactionExecutor:5] AutoSavingCache.java:386 - Saved KeyCache (24 items) in 46 ms
INFO [CompactionExecutor:5] AutoSavingCache.java:386 - Saved KeyCache (24 items) in 52 ms
INFO [CompactionExecutor:5] AutoSavingCache.java:386 - Saved KeyCache (24 items) in 67 ms
INFO [CompactionExecutor:5] AutoSavingCache.java:386 - Saved KeyCache (24 items) in 70 ms
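Besides nodetool info, the caches can also be managed at runtime with dedicated nodetool subcommands; a quick sketch of the ones available in the 3.x line:

# drop the content of a given cache
./nodetool invalidatekeycache
./nodetool invalidaterowcache
./nodetool invalidatecountercache
# resize the caches without a restart: key, row and counter capacities, in MB
./nodetool setcachecapacity 100 0 50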
The row cache is easier to test through the Java API. Each time a cache lookup succeeds, a "Row cache hit" event is registered in the query trace. Below you can find some test cases illustrating that:
@Test
public void should_read_data_from_row_cache() {
    for (int i = 0; i < 10; i++) {
        Statement queryInsert = QueryBuilder.insertInto("rowcachetest", "players_default_cached")
            .value("name", "Player").value("teamName", "Team_"+i);
        SESSION.execute(queryInsert);
    }
    // Shouldn't use row cache
    Statement read1Query = QueryBuilder.select().from("rowcachetest", "players_default_cached")
        .where(QueryBuilder.eq("name", "Player")).enableTracing();
    ResultSet resultSet = SESSION.execute(read1Query);
    ExecutionInfo info = resultSet.getExecutionInfo();
    assertThat(info.getQueryTrace().getEvents()).extracting("name").contains("Row cache miss")
        .doesNotContain("Row cache hit");
    int noCacheEventsLength = info.getQueryTrace().getEvents().size();
    info.getQueryTrace().getEvents().forEach(event -> System.out.println("Event without cache: "+event));
    // After the first lookup, the row should be put inside cache
    resultSet = SESSION.execute(read1Query);
    info = resultSet.getExecutionInfo();
    assertThat(info.getQueryTrace().getEvents()).extracting("name").contains("Row cache hit")
        .doesNotContain("Row cache miss");
    assertThat(info.getQueryTrace().getEvents().size()).isLessThan(noCacheEventsLength);
    info.getQueryTrace().getEvents().forEach(event -> System.out.println("Event with cache: "+event));
}

@Test
public void should_read_data_from_row_cache_only_once_because_of_data_change() throws InterruptedException {
    for (int i = 0; i < 10; i++) {
        Statement queryInsert = QueryBuilder.insertInto("rowcachetest", "players_default_cached")
            .value("name", "Player2").value("teamName", "Team_"+i);
        SESSION.execute(queryInsert);
    }
    // Shouldn't use row cache
    Statement read1Query = QueryBuilder.select().from("rowcachetest", "players_default_cached")
        .where(QueryBuilder.eq("name", "Player2")).enableTracing();
    ResultSet resultSet = SESSION.execute(read1Query);
    ExecutionInfo info = resultSet.getExecutionInfo();
    assertThat(info.getQueryTrace().getEvents()).extracting("name").contains("Row cache miss")
        .doesNotContain("Row cache hit");
    int noCacheEventsLength = info.getQueryTrace().getEvents().size();
    info.getQueryTrace().getEvents().forEach(event -> System.out.println("Event without cache: "+event));
    // After the first lookup, the row should be put inside cache
    resultSet = SESSION.execute(read1Query);
    info = resultSet.getExecutionInfo();
    assertThat(info.getQueryTrace().getEvents()).extracting("name").contains("Row cache hit")
        .doesNotContain("Row cache miss");
    assertThat(info.getQueryTrace().getEvents().size()).isLessThan(noCacheEventsLength);
    info.getQueryTrace().getEvents().forEach(event -> System.out.println("Event with cache: "+event));
    // Now, add some data and check if read is still made from cache
    // It shouldn't
    Statement queryInsert = QueryBuilder.insertInto("rowcachetest", "players_default_cached")
        .value("name", "Player2").value("teamName", "Team_Bis");
    SESSION.execute(queryInsert);
    resultSet = SESSION.execute(read1Query);
    info = resultSet.getExecutionInfo();
    assertThat(info.getQueryTrace().getEvents()).extracting("name").contains("Row cache miss")
        .doesNotContain("Row cache hit");
}
The table must be created with caching configured, for example as below:
CREATE TABLE players_default_cached (
    name text,
    teamName text,
    PRIMARY KEY (name, teamName)
) WITH caching = {'keys': 'ALL', 'rows_per_partition': '100'};
Similar tests can be written for the counter cache:
@Test
public void should_use_counter_cache_when_working_with_counter_column() {
    for (int i = 0; i < 10; i++) {
        Statement queryUpdate = QueryBuilder.update("countercachetest", "players_default_counter")
            .with(incr("games")).where(eq("name", "Player")).and(eq("teamName", "Team"))
            .enableTracing();
        ResultSet resultSet = SESSION.execute(queryUpdate);
        ExecutionInfo info = resultSet.getExecutionInfo();
        // As you can see, when counter cache is enabled, execution traces
        // should return similar entries to this one:
        assertThat(info.getQueryTrace().getEvents()).extracting("name")
            .contains("Fetching 1 counter values from cache");
    }
}
where the tested table looks like:
CREATE TABLE players_default_counter (
    name text,
    teamName text,
    games counter,
    PRIMARY KEY (name, teamName)
);
In the current (3.4) version, Cassandra supports 3 families of cache: the key cache to speed up data lookup in SSTables, the row cache to avoid disk seeks entirely, and the counter cache to guarantee quick updates on counter columns. We saw that all of them share a similar configuration: the number of stored entries, the reserved memory size and the save period. Some of them, such as the row and counter caches, can be tested directly with query execution traces. The other one (the key cache) can be checked with nodetool commands.