Java Concurrency in Practice

Author: Brian Goetz, Tim Peierls
All Stack Overflow 149
This Year Stack Overflow 11
This Month Stack Overflow 34


by dbsmith83   2020-11-03
Idk about GP, but one book I highly recommend is Java Concurrency in Practice -

It's old, but the material holds up well since it covers a lot of fundamentals

by ivanr   2020-08-16
Try these as a starting point:

- Think Java (programming, foundational; free)

- Think Data Structures (programming, foundational; free)

- Effective Java (classic)

- Java Concurrency in Practice (classic)

- Continuous Delivery in Java (essential)

by anonymous   2019-07-21

I bought Java Concurrency in Practice as a result of this problem and found they discuss an issue very much like this one in Chapter 7: Cancellation and Shutdown, which can be summarised by the quote:

Interruption is usually the most successful way to implement cancellation

As the HttpClient uses blocking socket IO, which does not support interruption, I took a two-pronged approach. After allocating my httpClient, I check for interruption just before the actual httpClient.execute() call, and in my unsubscribe() method I interrupt the thread and then call httpClient.getConnectionManager().shutdown();. This took care of my problem and was a very simple change. No more interleaving issues!

I also made the boolean unsubscribe field volatile as suggested, which I should've done before - this alone, however, would not have solved the issue.

by anonymous   2019-07-21

There is no such thing as a "static class" in Java. There are static nested classes, but I presume your question is not about that type of class.

Classes are loaded once per classloader, not once per virtual machine. This is an important difference: for example, application servers like Tomcat have a different classloader per deployed application, so that each application is independent (not completely independent, but better than nothing).

The effects on multithreading are the effects of shared data structures in multithreading - nothing special to Java. There are a lot of books on this subject, like (centered on Java) or (which explains different concurrency models; a really interesting book).

by anonymous   2019-07-21

I don't mean this as a flippant answer, but: Java Concurrency In Practice, buy it now...

But to be more serious, you are now quickly getting into a very hairy testing scenario. I don't believe that you are going to have to get into using java.util.concurrent.ExecutorService, but if you are going to communicate between threads, you need to be conscious of having thread-safe communication channels (simple Lists, MessageQueues, database tables, etc...) between each simulation thread.
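To illustrate the kind of thread-safe channel mentioned above, here is a minimal sketch using a BlockingQueue between a simulation thread and the main thread. The class and message names are made up for illustration:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class SimulationChannel {
    public static void main(String[] args) throws InterruptedException {
        // A bounded, thread-safe channel between a simulation thread and a consumer.
        BlockingQueue<String> results = new ArrayBlockingQueue<>(16);

        Thread simulation = new Thread(() -> {
            try {
                // Each simulation step publishes its result through the queue;
                // put() blocks if the queue is full, providing backpressure.
                results.put("step-1 done");
                results.put("step-2 done");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        simulation.start();

        // take() blocks until a result is available; no explicit locking needed.
        System.out.println(results.take());
        System.out.println(results.take());
        simulation.join();
    }
}
```

The point is that all coordination lives inside the queue, so neither thread needs to know about the other's locks.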

by anonymous   2019-07-21

Buy yourself Java Concurrency in Practice - it really describes everything about Java and multithreading (and what can totally go wrong). It is written by one of the lead architects of Java back in the Sun days and the author of the java.util.concurrent framework.

by anonymous   2019-07-21

As it happens, I was just reading about this this morning on my way to work, in Java Concurrency In Practice by Brian Goetz. Basically he says you should do one of two things:

  1. Propagate the InterruptedException - Declare your method to throw the checked InterruptedException so that your caller has to deal with it.

  2. Restore the Interrupt - Sometimes you cannot throw InterruptedException. In these cases you should catch the InterruptedException and restore the interrupt status by calling the interrupt() method on the currentThread so the code higher up the call stack can see that an interrupt was issued.
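Both options can be sketched in a few lines. The queue and method names below are made up for illustration:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class InterruptHandling {
    // Option 1: propagate - the caller is forced to deal with InterruptedException.
    static String takeTask(BlockingQueue<String> queue) throws InterruptedException {
        return queue.take();
    }

    // Option 2: restore - catch the exception and re-set the interrupt status
    // so code higher up the call stack can still see that an interrupt occurred.
    static String pollTask(BlockingQueue<String> queue) {
        try {
            return queue.take();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt(); // restore the interrupt status
            return null;
        }
    }

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();
        queue.put("job");
        System.out.println(takeTask(queue));
        Thread.currentThread().interrupt(); // simulate an interrupt arriving
        System.out.println(pollTask(queue)); // take() throws, interrupt is restored
        System.out.println(Thread.currentThread().isInterrupted());
    }
}
```

The one thing Goetz warns against is swallowing the exception silently, which is exactly what option 2 avoids.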

by anonymous   2019-01-13

In general, yes, it is possible to see a field in a partially constructed state if the field is not published in a safe manner. In the particular case of your question, the volatile keyword is a satisfactory form of safe publication. According to Java Concurrency in Practice:

To publish an object safely, both the reference to the object and the object's state must be made visible to other threads at the same time. A properly constructed object can be safely published by:

  • Initializing an object reference from a static initializer.
  • Storing a reference to it into a volatile field.
  • Storing a reference to it into a final field.
  • Storing a reference to it into a field that is properly guarded by a (synchronized) lock.
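As a sketch of the second mechanism in the list (a volatile field), consider this minimal example; the Config class is made up for illustration:

```java
public class VolatilePublication {
    static class Config {
        final int timeoutMs;
        Config(int timeoutMs) { this.timeoutMs = timeoutMs; }
    }

    // Writing the reference to a volatile field publishes the object safely:
    // any thread that reads 'config' and sees the new reference is also
    // guaranteed to see the fully constructed Config state.
    private static volatile Config config;

    public static void main(String[] args) throws InterruptedException {
        Thread writer = new Thread(() -> config = new Config(5000));
        writer.start();
        writer.join();
        System.out.println(config.timeoutMs);
    }
}
```

Without volatile (or one of the other mechanisms listed above), a reader thread could in principle observe the reference before the writes made in the constructor become visible.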

For more information, see the following:

  • Does a volatile reference really guarantee that the inner state of the object is visible to other threads?
  • Java multi-threading & Safe Publication
  • Java Concurrency in Practice
by anonymous   2019-01-13

And then I am using volatile, so it will guarantee the consistent view of the variable across all threads.

Thread-safety is already guaranteed by atomic variables. volatile is redundant if you won't reassign the variable. You can replace volatile with final here:

private final AtomicInteger atomInt = new AtomicInteger(3);

Does it mean that I have got complete thread safety or still there are chances of "memory consistency issues"?

At this moment, it's absolutely thread-safe. No "memory consistency issues" can happen with this variable. But using proper thread-safe components doesn't mean that the whole class/program is thread-safe. Problems might still take place if the interactions between those components are incorrect.

Using volatile variables reduces the risk of memory consistency errors ...

volatile variables can only guarantee visibility. They don't guarantee atomicity.

As Brian Goetz writes (emphasis mine):

volatile variables are convenient, but they have limitations. The most common use for volatile variables is as a completion, interruption, or status flag. Volatile variables can be used for other kinds of state information, but more care is required when attempting this. For example, the semantics of volatile are not strong enough to make the increment operation (count++) atomic, unless you can guarantee that the variable is written only from a single thread.

You can use volatile variables only when all the following criteria are met:

  • Writes to the variable do not depend on its current value, or you can ensure that only a single thread ever updates the value;
  • The variable does not participate in invariants with other state variables;
  • Locking is not required for any other reason while the variable is being accessed.
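The first criterion is exactly where count++ fails, because an increment is a read-modify-write. A small sketch contrasting a volatile counter with an AtomicInteger:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class IncrementDemo {
    static volatile int volatileCount = 0;                        // visibility only
    static final AtomicInteger atomicCount = new AtomicInteger(); // atomic updates

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int i = 0; i < 100_000; i++) {
                volatileCount++;               // NOT atomic: read, add, write can interleave
                atomicCount.incrementAndGet(); // atomic: no updates are lost
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join(); t2.join();

        // The atomic counter is always exact; the volatile one may have lost updates.
        System.out.println(atomicCount.get());
        System.out.println(volatileCount <= atomicCount.get());
    }
}
```

The volatile counter may or may not come out short on any given run, which is precisely what makes such bugs hard to find.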

From the docs of the java.util.concurrent.atomic package:

  • get has the memory effects of reading a volatile variable.
  • set has the memory effects of writing (assigning) a volatile variable.
by anonymous   2019-01-13

There are two points to consider when reasoning about the thread safety of a particular class:

  1. Visibility of shared state between threads.
  2. Safety (preserving class invariants) when class object is used by multiple threads through class methods.

The shared state of the Example class consists of only one Thing object.

  1. The class isn't thread-safe from the visibility perspective. The result of setThing performed by one thread isn't guaranteed to be seen by other threads, so they can work with stale data. An NPE is also possible, because the initial value of thing after class initialization is null.
  2. It's not possible to say whether it's safe to access the Thing class through the use method without seeing its source code. However, Example invokes the use method without any synchronization, so Thing has to be thread-safe itself - otherwise Example isn't thread-safe.

As a result, Example isn't thread-safe. To fix point 1 you can either make the thing field volatile, if you really need the setter, or mark it final and initialize it in the constructor. The easiest way to ensure that point 2 is met is to mark use as synchronized. If you mark setThing as synchronized as well, you don't need volatile anymore. There are lots of other, more sophisticated techniques for meeting point 2, though. This great book describes everything written here in more detail.
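Since the original Example and Thing sources aren't shown, here is a hypothetical sketch of what a fixed version might look like; the shapes of setThing, use, and Thing are assumptions:

```java
// A sketch of a thread-safe version of the Example class from the question.
public class Example {
    private volatile Thing thing; // volatile fixes the visibility problem (point 1)

    public void setThing(Thing t) {
        this.thing = t;
    }

    // synchronized serializes calls into Thing, covering point 2
    // when Thing itself is not known to be thread-safe.
    public synchronized void use() {
        Thing t = thing;
        if (t != null) { // guard against the initial null
            t.doWork();
        }
    }

    interface Thing { void doWork(); } // assumed shape of Thing

    public static void main(String[] args) {
        Example e = new Example();
        e.setThing(() -> System.out.println("working"));
        e.use();
    }
}
```

If setThing were also synchronized, the volatile modifier could be dropped, since both reads and writes of the field would then happen under the same lock.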

by anonymous   2019-01-13

From Java Concurrency in Practice 15.2.3 CAS support in the JVM :

On platforms supporting CAS, the runtime inlines them into the appropriate machine instruction(s); in the worst case, if a CAS-like instruction is not available the JVM uses a spin lock.
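The user-visible face of that machine-level CAS is methods like AtomicInteger.compareAndSet. A tiny sketch:

```java
import java.util.concurrent.atomic.AtomicInteger;

public class CasDemo {
    public static void main(String[] args) {
        AtomicInteger value = new AtomicInteger(10);

        // compareAndSet atomically sets the value only if it still holds the
        // expected value, and returns whether the swap succeeded; on most
        // platforms this maps to a single hardware CAS instruction.
        System.out.println(value.compareAndSet(10, 20)); // succeeds: 10 was expected
        System.out.println(value.compareAndSet(10, 30)); // fails: value is now 20
        System.out.println(value.get());
    }
}
```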

by anonymous   2019-01-13

It's well described in Java Concurrency in Practice:

The lazy initialization holder class idiom uses a class whose only purpose is to initialize the Resource. The JVM defers initializing the ResourceHolder class until it is actually used [JLS 12.4.1], and because the Resource is initialized with a static initializer, no additional synchronization is needed. The first call to getResource by any thread causes ResourceHolder to be loaded and initialized, at which time the initialization of the Resource happens through the static initializer.

Static initialization

Static initializers are run by the JVM at class initialization time, after class loading but before the class is used by any thread. Because the JVM acquires a lock during initialization [JLS 12.4.2] and this lock is acquired by each thread at least once to ensure that the class has been loaded, memory writes made during static initialization are automatically visible to all threads. Thus statically initialized objects require no explicit synchronization either during construction or when being referenced.
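Putting the idiom described above into code, a minimal sketch (the Resource class body is made up for illustration):

```java
public class ResourceFactory {
    static class Resource {
        Resource() { System.out.println("Resource initialized"); }
    }

    // The holder class is not initialized until getResource() first touches it,
    // and the JVM's class-initialization lock makes that initialization both
    // safe and visible to all threads, with no explicit synchronization.
    private static class ResourceHolder {
        static final Resource resource = new Resource();
    }

    public static Resource getResource() {
        return ResourceHolder.resource;
    }

    public static void main(String[] args) {
        System.out.println("before first use");
        Resource r1 = getResource(); // holder initialization happens here
        Resource r2 = getResource(); // same instance, no re-initialization
        System.out.println(r1 == r2);
    }
}
```

Note that the "Resource initialized" line only appears on the first call, demonstrating the lazy, once-only initialization.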

by anonymous   2018-03-19

Although there are some good answers already posted, here is what I found while reading Java Concurrency in Practice, Chapter 3 - Sharing Objects.

Quote from the book.

The publication requirements for an object depend on its mutability:

  • Immutable objects can be published through any mechanism;
  • Effectively immutable objects (whose state will not be modified after publication) must be safely published;
  • Mutable objects must be safely published, and must be either threadsafe or guarded by a lock.

The book then states the ways to safely publish mutable objects:

To publish an object safely, both the reference to the object and the object's state must be made visible to other threads at the same time. A properly constructed object can be safely published by:

  • Initializing an object reference from a static initializer;
  • Storing a reference to it into a volatile field or AtomicReference;
  • Storing a reference to it into a final field of a properly constructed object; or
  • Storing a reference to it into a field that is properly guarded by a lock.

The last point refers to using various mechanisms, like concurrent data structures and/or the synchronized keyword.

by anonymous   2018-03-19

An explanation is given with an example in Java Concurrency In Practice chapter 4 section 4.3.5.

public class SafePoint {
    private int x, y;

    private SafePoint(int[] a) {
        this(a[0], a[1]);
    }

    public SafePoint(SafePoint p) {
        this(p.get());
    }

    public SafePoint(int x, int y) {
        this.x = x;
        this.y = y;
    }

    public synchronized int[] get() {
        return new int[] { x, y };
    }

    public synchronized void set(int x, int y) {
        this.x = x;
        this.y = y;
    }
}
The private constructor exists to avoid the race condition that would occur if the copy constructor were implemented as this(p.x, p.y).

What this means is: if you did not have the private constructor and you implemented the copy constructor in the following way:

public SafePoint(SafePoint p) {
    this(p.x, p.y);
}

Now assume that thread A, which has access to SafePoint p, is executing the copy constructor's this(p.x, p.y) instruction, and at an unlucky moment another thread B, which also has access to SafePoint p, executes the setter set(int x, int y) on object p. Since your copy constructor is accessing p's x and y instance variables directly, without proper locking, it could see an inconsistent state of object p.

Whereas the private constructor accesses p's variables x and y through get(), which is synchronized, so you are guaranteed to see a consistent state of object p.

by nmg   2017-12-01
In the talk "Clojure Concurrency" [1], Rich Hickey demands that everyone in the room read "Java Concurrency in Practice" [2]. "It will scare the crap out of you."

[1] [2]

by mindcrime   2017-10-02
Wow, that's actually kinda tough to answer. There's been a LOT of changes in the Java world in the last 10 years - as you might guess. And I mean both in terms of the language itself, and the tooling and ecosystem as far as libraries and frameworks, etc.

The answer also depends a bit on what exactly you want to do. If you're mainly interested in web apps, it's one thing, "big data" is another world altogether, mobile (Android) has its own ecosystem, etc.

All of that said, here are some thoughts:

Generics and the newer Collections related stuff is one area that changed a lot. There's online documentation at:

To get started with the Java 8 stuff, a book like "Java 8 in Action" would be good.

Another good intro to the Java 8 era stuff is

And to make it even harder, Java 9 just dropped, so there's even more new stuff. I just picked up this book myself, but haven't had a lot of time to dig into it yet.

For frameworks, Spring and Hibernate are both still popular and it wouldn't hurt to brush up on both of those. Spring Boot in particular has caught on for a lot of Java developers.

Also, Tomcat is still very popular for hosting Java web applications and services of various sorts. JBoss / Wildfly is still around, but JEE (as J2EE is now known) is not as popular as in the past (even though it has actually improved a LOT).

Play and Dropwizard are two more frameworks you might want to familiarize yourself with.

In terms of tools, Eclipse is still popular, IntelliJ is probably the most popular Java IDE these days, and Netbeans seems to have faded from view a bit. Ant has fallen out of favour for builds, with most devs now using either Maven or Gradle. Read up on / play around with both of those and you'll be in good shape there.

Also, Java shops have also been affected by the overall move to "The Cloud" and you can't really ignore that either. If you haven't already, you'll probably want to familiarize yourself with AWS and the AWS SDK.

If you want to work/play in the "big data" space, you'll need some combination of Hadoop, Kafka, Spark, Hive, Storm, Flume, HBase, Impala, etc., etc., etc.

by anonymous   2017-08-20

If you dislike Times New Roman, just change the browser default font to Tahoma or something like that.


Then start here and click your way through the Next links. Then there are the API docs, each with examples in the introductory text, e.g. ExecutorService. Then there are books, like Java Concurrency in Practice.

by anonymous   2017-08-20

Assuming that your problem is a slow external API, a solution could be the use of either threaded programming or asynchronous programming. By default when doing IO, your code will block. This basically means that if you have a method that does an HTTP request to retrieve some JSON your method will tell your operating system that you're going to sleep and you don't want to be woken up until the operating system has a response to that request. Since that can take several seconds, your application will just idly have to wait.

This behavior is not specific to just HTTP requests. Reading from a file or a device such as a webcam has the same implications. Software does this to prevent hogging up the CPU when it obviously has no use of it.

So the question in your case is: Do we really have to wait for one method to finish before we can call another? In the event that the behavior of method_two is dependent on the outcome of method_one, then yes. But in your case, it seems that they are individual units of work without co-dependence. So there is a potential for concurrency execution.

You can start new threads by initializing an instance of the Thread class with a block that contains the code you'd like to run. Think of a thread as a program inside your program. Your Ruby interpreter will automatically alternate between the threads and your main program. You can start as many threads as you'd like, but the more threads you create, the longer your main program will have to wait before returning to execution. However, we are probably talking microseconds or less. Let's look at an example of threaded execution.

def main_method
  Thread.new { method_one }
  Thread.new { method_two }
  Thread.new { method_three }
end

def method_one
  # something_slow_that_does_an_http_request
end

def method_two
  # something_slow_that_does_an_http_request
end

def method_three
  # something_slow_that_does_an_http_request
end

Calling main_method will cause all three methods to be executed in what appears to be parallel. In reality they are still being sequentially processed: instead of going to sleep when method_one blocks, Ruby will just return to the main thread, and switch back to the method_one thread when the OS has the input ready.

Assuming each method takes 2 ms to execute, minus the wait for the response, that means all three methods are running after just 6 ms - practically instantly.

If we assume that a response takes 500 ms to complete, that means you can cut down your total execution time from 2 + 500 + 2 + 500 + 2 + 500 to just 2 + 2 + 2 + 500 - in other words from 1506 ms to just 506 ms.

It will feel like the methods are running simultaneously, but in fact they are just sleeping simultaneously.

In your case, however, you have a challenge, because you have an operation that is dependent on the completion of a set of previous operations. In other words, if you have tasks A, B, C, D, E and F, then A, B, C, D and E can be performed simultaneously, but F cannot be performed until A, B, C, D and E are all complete.

There are different ways to solve this. Let's look at a simple solution: a sleepy loop in the main thread that periodically examines a list of return values to check whether some condition is fulfilled.

def task_1
  # Something slow
  return results
end

def task_2
  # Something slow
  return results
end

def task_3
  # Something slow
  return results
end

my_responses = {}

Thread.new { my_responses[:result_1] = task_1 }
Thread.new { my_responses[:result_2] = task_2 }
Thread.new { my_responses[:result_3] = task_3 }

while my_responses.count < 3 # Prevents the main thread from continuing until the three spawned threads are done and have dumped their results in the hash.
  sleep(0.1) # Sleep 100 ms between checks; without it, the main thread would check the count thousands of times per second for no benefit.
end

# Any code below this line will not execute until all three results are collected.

Keep in mind that multithreaded programming is a tricky subject with numerous pitfalls. With MRI it's not so bad, because while MRI will happily switch between blocked threads, it doesn't support executing two threads simultaneously, and that sidesteps quite a few concurrency concerns.

If you want to get into multithreaded programming, I recommend this book:

It's centered around Java, but the pitfalls and concepts explained are universal.

by anonymous   2017-08-20

First of all, take a look at this SO question on reasons not to use Vector. That being said:

1) Vector locks on every operation, meaning it only allows one thread at a time to call any of its operations (get, set, add, etc.). However, there is nothing preventing multiple threads from modifying the Bs or their members, because they can obtain references to them at different times. The only guarantee with Vector (or classes with similar synchronization policies) is that no two threads can concurrently modify the vector itself and thereby get into a race condition (which could throw a ConcurrentModificationException and/or lead to undefined behavior).

2) As above, there is nothing preventing multiple threads from accessing the Cs at the same time, because they can obtain references to them at different times.

If you need to protect the state of an object, you need to do it as close to the state as possible. Java has no concept of a thread owning an object. So in your case, if you want to prevent many threads from calling setSameString concurrently, you need to declare the method synchronized.

I recommend the excellent book by Brian Goetz on concurrency for more on the topic.
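To make the point concrete, here is a sketch of guarding state inside the contained object itself with a synchronized method. The names B and setSameString come from the question; their bodies and the surrounding Container class are assumptions:

```java
import java.util.List;
import java.util.Vector;

public class Container {
    static class B {
        private String value;
        // Guarding the state inside B itself: synchronized makes each
        // access to 'value' atomic with respect to other callers, no matter
        // how the reference to this B was obtained.
        synchronized void setSameString(String v) { this.value = v; }
        synchronized String get() { return value; }
    }

    public static void main(String[] args) throws InterruptedException {
        List<B> bs = new Vector<>(); // Vector only guards its own operations
        B b = new B();
        bs.add(b);

        Thread t = new Thread(() -> bs.get(0).setSameString("from-thread"));
        t.start();
        t.join();
        System.out.println(b.get());
    }
}
```

The Vector here only makes the list operations (add, get) safe; the synchronized methods on B are what protect B's own state.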