Java memory model

In multi-threaded languages such as Java, a memory model is necessary to correctly reason about the behavior of code executed by concurrent tasks.

This article is an introduction to Java's memory model specification. In the first part we'll see the key concepts needed to understand how memory works. After that, we'll try to define the concept of a memory model for multi-threaded languages. At the end we'll carry some of these general behaviors and ideas over into Java's world, without devoting much space to them; that will be done in one of the next articles.

Reminder about memory

Before analyzing memory as a complicated piece of concurrent programming, let's start by recalling some basics about how memory works. To define memory shortly, we can say that it's a container holding all the information of currently running programs. This container is known by the acronym RAM (Random Access Memory; other terms in use are: primary storage, primary memory, main storage, internal storage, main memory). It stores data at specific addresses: for example, you can retrieve data belonging to program 'A' at address 0 and data of program 'B' at address 39. Addresses are like mailboxes: they help to find the needed information easily and at any time. But the stored information can change at every moment, exactly like the names written on mailboxes.

Another important participant in program execution is the central processing unit (CPU). It's a control center that transforms instructions received from hardware and software (input) into the expected output (for example: an action such as opening a new program). To make this transformation, the CPU uses two units: the arithmetic logic unit (executes arithmetic and logic operations, such as 1 > 2 ?) and the control unit (sends electrical signals so that the computer system can execute stored program instructions). We can now deduce that the CPU is closely associated with the memory holding the data and instructions to process. First, this information is sent to memory by the control unit. Next, the arithmetic logic unit gets it and makes the necessary operations before returning the result back to memory. These stages are also known as: fetch (the instruction), decode (the instruction, to understand what should be done and on which data), execute (the instruction) and write (the result back into memory). When the processed data is ready for output (to an output device or to a secondary storage device such as a hard disk), it's also removed from memory.

An interesting concept resulting from the interaction between CPU units and memory is the CPU cache. The CPU uses it to limit accesses to main memory: it stores a copy of the data frequently read from memory. The CPU cache is also one of the elements that can cause problems in the execution of multi-threaded programs. Imagine the following class:

class TestClass {

  int telNumber = 0;
  String name = "";

  void write() {
    telNumber = 39839309;
    name = "O'Lery";
  }

  void read() {
    System.out.println("Tel number is: " + this.telNumber);
    System.out.println("Name is: " + this.name);
  }

}

Now, suppose there are two threads operating on the same instance of this class: one will read and the other will write both variables. Suppose the telNumber field is stored in the CPU cache while the name field exists only in main memory. Access to the first field will be faster than to the second. But we can't be sure that the reading thread will see the changes made by the writing thread, because it will still read from the cache while the changes could have been made in main memory. The situation becomes even more complicated when the CPU reorders the execution of instructions to improve program performance. This kind of reordering is called out-of-order execution.
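The visibility problem above can be fixed in real Java with the volatile keyword. Below is a minimal, self-contained sketch (class and field names follow the example above, but the main method is illustrative): the writer publishes both fields through a single volatile write, and the reader is guaranteed to see them.

```java
public class VisibilityDemo {

  // volatile: a write by one thread is guaranteed to be visible
  // to a subsequent read of the same field by another thread.
  private volatile int telNumber = 0;
  // name is deliberately non-volatile: it is still published safely
  // because it is written before the volatile write to telNumber.
  private String name = "";

  void write() {
    name = "O'Lery";
    telNumber = 39839309; // volatile write publishes the name write too
  }

  public static void main(String[] args) throws InterruptedException {
    VisibilityDemo demo = new VisibilityDemo();
    Thread reader = new Thread(() -> {
      // Spins until the volatile write becomes visible; with a plain
      // (non-volatile) field this loop could run forever on a stale value.
      while (demo.telNumber == 0) { }
      System.out.println("Tel number is: " + demo.telNumber);
      System.out.println("Name is: " + demo.name);
    });
    reader.start();
    new Thread(demo::write).start();
    reader.join();
  }
}
```

Note the ordering in write(): because name is assigned before the volatile telNumber, a reader that observes the new telNumber is also guaranteed to observe the new name.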

To guarantee that some operations execute before others, we must use a memory barrier (also called a membar, memory fence or fence instruction). Let's come back to our TestClass and put a memory barrier between the two prints in the read method. Thanks to this barrier, the CPU will execute the print of the tel number before the print of the name String. 4 types of memory barriers exist (load and store are the memory operations: the first is a memory read and the second a memory write):
- StoreStore: guarantees that the writes separated by the memory barrier become visible one before the other. For example:

set x = 10;
StoreStore barrier
set y = 20;

In this situation, thanks to the StoreStore barrier, we're sure that all readers will see x set to 10 before they see y set to 20. An implementation of StoreStore could be the flushing of all dirty entries out of the cache.
- LoadLoad: this memory barrier guarantees the ordering of loads from memory. Thanks to it, we can be sure that one piece of information is read before another one:
while (y != 20);
LoadLoad barrier
get x;

Thanks to the memory barrier in this simple example, the thread is guaranteed to read x only after it has observed y equal to 20; combined with the StoreStore case above, it will then necessarily see x set to 10. An implementation of LoadLoad could be the invalidation of all cached entries.
- LoadStore: ensures that the loads are all made before the writes:
get b;
LoadStore barrier
set a = 30;

And the other thread:
get a;
LoadStore barrier
set b = 20;

- StoreLoad: mixes both write and read operations; it ensures that all writes made before the memory barrier are visible to other processors and that all loads after the barrier always receive the latest value, for example:
set a = 30;
StoreLoad barrier
get b;

And the other thread:
set b = 20;
StoreLoad barrier
get a;

Notice that reordering can also be done by programming language compilers, again to improve performance.
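Since Java 9, these barriers can be emitted explicitly with the static fence methods of java.lang.invoke.VarHandle. A minimal single-threaded sketch (the variable names x and y mirror the pseudo-code above; on its own one thread always sees its own writes, the fences only matter when another thread observes them):

```java
import java.lang.invoke.VarHandle;

public class FenceDemo {

  static int x = 0;
  static int y = 0;

  public static void main(String[] args) {
    x = 10;
    // StoreStore: the write to x above cannot be reordered
    // after the write to y below.
    VarHandle.storeStoreFence();
    y = 20;

    int firstRead = y;
    // LoadLoad: the read of y above cannot be reordered
    // after the read of x below.
    VarHandle.loadLoadFence();
    int secondRead = x;

    // fullFence() acts as the strongest (StoreLoad-style) barrier:
    // no load or store crosses it in either direction.
    VarHandle.fullFence();

    System.out.println(firstRead + " " + secondRead);
  }
}
```

VarHandle also offers acquireFence() (LoadLoad + LoadStore) and releaseFence() (StoreStore + LoadStore); there is no dedicated StoreLoad fence, for which fullFence() is used.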

What is a memory model?

As we mentioned earlier, the compiler is also able to do some reordering in the name of optimization. But it shouldn't do it without considering memory rules. We can distinguish two kinds of memory models:
- strong: in this model, all writes of one CPU core are visible to the other CPU cores, in execution order. Of the 4 types of memory barriers described, only one is needed in this model: StoreLoad.
- weak: here, all 4 memory barriers can be required, because instruction reordering can be done by the compiler as well as by the processor.

To simplify, a memory model describes which operations should be visible at a given moment. It also defines the behavior in specific situations, such as synchronization when two threads try to access the same variable or method at the same time.

Java Memory Model (JMM)

The first Java Memory Model (JMM), defined in 1995, was criticized. It didn't allow many runtime optimizations and didn't protect code against concurrency issues (for example, some final fields were observed to change their values, and writes to volatile fields were reordered with non-volatile ones, producing a non-intuitive environment). It's only in 2004 that a new JMM took effect. It had to answer the following questions:
- How to simplify the synchronized syntax?
- How to facilitate the understanding of multi-threaded program execution?
- How to guarantee initialization safety, so that final fields aren't corrupted in a multi-threaded environment even without synchronization?
- How to map the proven JMM guarantees onto popular hardware architectures?
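The initialization-safety point can be illustrated with a short sketch (the Config class and its fields are hypothetical): once a constructor finishes, final fields are guaranteed by the JMM to be correctly visible to every thread that obtains a reference to the object, without any synchronization.

```java
public class FinalFieldDemo {

  static class Config {
    final int timeout;  // visible to all threads after construction
    int retries;        // no such guarantee without extra synchronization

    Config(int timeout, int retries) {
      this.timeout = timeout;
      this.retries = retries;
    }
  }

  static Config shared;

  public static void main(String[] args) throws InterruptedException {
    Thread publisher = new Thread(() -> shared = new Config(30, 3));
    publisher.start();
    publisher.join();

    // Any thread that sees a non-null 'shared' reference is guaranteed
    // to see timeout == 30; for the non-final 'retries' field such a
    // guarantee would require volatile or synchronized.
    System.out.println("timeout=" + shared.timeout);
  }
}
```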

Among the concepts strongly associated with the JMM, we can distinguish: happens-before rules, volatile reads and writes, synchronized blocks, and final fields.

This article showed that some strange situations can occur once our Java code is compiled. The goal of these reorderings is to optimize the execution while, thanks to the defined Java Memory Model, still guaranteeing execution that respects all happens-before rules. A lot of Java tools are there to simplify this: volatile reads that prevent stale cache reads, synchronized blocks whose monitor can be held by only one thread at a given moment, and even final fields.
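To close, a small sketch of the synchronized keyword mentioned above (the counter class is illustrative): because the object's monitor can be held by only one thread at a time, the two threads' increments never get lost.

```java
public class SynchronizedCounterDemo {

  private int count = 0;

  // Only one thread at a time can hold this object's monitor, so the
  // read-modify-write below is atomic, and the monitor release/acquire
  // makes the update visible to the next thread entering the block.
  synchronized void increment() {
    count++;
  }

  synchronized int get() {
    return count;
  }

  public static void main(String[] args) throws InterruptedException {
    SynchronizedCounterDemo counter = new SynchronizedCounterDemo();
    Runnable task = () -> {
      for (int i = 0; i < 10_000; i++) {
        counter.increment();
      }
    };
    Thread t1 = new Thread(task);
    Thread t2 = new Thread(task);
    t1.start();
    t2.start();
    t1.join();
    t2.join();
    // Without synchronized, the result would usually be below 20000
    // because concurrent count++ operations would overwrite each other.
    System.out.println("count=" + counter.get());
  }
}
```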

