
This is how multithreading, concurrency and parallel programming work in Java.


  • What is the difference between multithreading (concurrency) and parallelism?
  • What is the difference between processes and threads?
  • How does the Runnable interface work?
  • How to wait for threads using the join() method.
  • What does the volatile keyword do?
  • What are deadlocks and livelocks?
  • How to use the synchronized keyword to synchronize multiple threads.
  • What are atomic operations?
  • How do the wait() and notify() methods work?

I would rather risk dropping everything than go twice!

Last year after the senior Christmas party at my grandmother's, I wanted to put all the coffee cups in the dishwasher in one go.

That went completely wrong, and suddenly not all of my grandma's cups made it back into the cupboard.

Your computer also wants to do as much as possible at the same time and has to be careful not to drop the dishes.

And in this article we want to take a closer look at how it does this.

The subject area that deals with this question is parallel programming, also called multithreading or concurrency.

Above all, we want to clarify what the dishes of a computer look like.

The plates are called processes and the cups are called threads.

Parallel and sequential programming

The opposite of parallel programming is sequential programming.

With sequential programming, a program is processed synchronously, i.e. each line is executed in sequence.

Whereas in parallel programming different program parts are processed at the same time.

Due to the multicore processors common today, real parallelism is possible.

Parallelism and concurrency

Each processor core can execute one so-called process at a time.

For example, if your processor has four cores, four processes can be processed at the same time.

This is exactly what we call parallelism!
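If you want to know how many cores your own machine offers for this kind of parallelism, you can query it at runtime. A minimal sketch (the class name is my own choice):

```java
public class CoreCount {
    public static void main(String[] args) {
        // Number of logical processors available to the JVM
        int cores = Runtime.getRuntime().availableProcessors();
        System.out.println("Available cores: " + cores);
    }
}
```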

But what happens as soon as more processes are running than your processor has cores? Will your computer then drop the dishes?

No, not yet! Because in this case the so-called concurrency enters the playing field.

With concurrency, the individual processes are assigned time slots in which they can use the existing processor cores.

Of course, these time slots should be chosen so optimally that it appears to the user that all processes are being processed at the same time.

Processes and threads

We can think of a process as an execution unit of a program.

The planning and management of the processes is done by the process scheduler of the operating system.

Every process has everything that is needed to run it. For example, each process has its own memory area reserved.

Different processes can talk to each other and even across systems. This fact is what makes today's cloud computing possible.

The communication between different processes is called Inter-Process-Communication (IPC). A process in turn contains one or more threads.

Threads work similarly to processes themselves, except that a thread uses the resources of the process to which it belongs.

With the help of threads, we can achieve concurrency within a process.

We can imagine threads as a subdivision of a process into units that are executed concurrently on the resources belonging to that process.

Since different threads use the same system resources, resource conflicts can arise that we must be aware of and deal with.

Advantages and disadvantages of multithreading and parallel programming

Is the world of parallel programming and multithreading a pony farm in good weather?

Or are there also dark clouds, heavy rain and a lot of mud?

Of course, thanks to multithreading we can write programs that make better use of the available resources, are therefore more efficient, and can react to several things at the same time, which in turn leads to a better user experience.

But are there any disadvantages? Unfortunately the answer is yes!

Developing programs that use multithreading or parallel programming is far more complex.

Because the same memory area is shared by different threads, errors that are very difficult to find can occur very quickly.

Since the scheduling, i.e. the time slots in which a thread is allowed to use certain resources, can be different with each program run, this type of error is often not reproducible and only occurs sporadically.

And of course the coordination and organization of the threads does not come for free.

As soon as we overdo it with multithreading (concurrency), these costs become so high that the potential performance gain we achieve through parallelization is negated.

The life story of a thread in Java

Now it's best to get yourself a bag of chips, sit down and listen to me. I want to tell you about the life of a thread in Java.

A thread has five phases.

As with all of us, a thread sees the light of day from its birth. We call this phase new.

As soon as it is safe and sound, it is ready to take on work. What a hardworking guy, right? This phase is called Runnable.

The moment the thread gets work, it switches to the Running phase, in which it tries to complete its tasks as quickly as possible.

Since a thread communicates with other threads, it can happen that it has to wait for a result from a colleague and enters the Waiting phase. This situation is particularly important in connection with the producer-consumer pattern, which we will come back to later. As soon as the waiting phase is over, the thread goes back to the Running state.

Now put your bag of chips away and get a handkerchief, because the life of a thread also ends with death (Dead). As soon as a thread dies, it is no longer available.
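We can observe the birth and death phases directly in code. Note that Java's own Thread.State enum names the states slightly differently (NEW, RUNNABLE, ..., TERMINATED); the class name below is my own choice:

```java
public class LifecycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> { /* a short-lived task */ });
        System.out.println(t.getState()); // NEW: born, but not yet started
        t.start();                        // now eligible to run
        t.join();                         // wait until the thread dies
        System.out.println(t.getState()); // TERMINATED: the "Dead" phase
    }
}
```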

Sequential and parallel execution in comparison

Okay, let's get our fingers dirty and turn a sequential into a parallel executable program.

For this purpose we consider the following example sequential program:

1: class Runner1 {
2:     public void startRunning() {
3:         for (int i = 0; i <= 10; i++) {
4:             System.out.println("Runner1 " + i);
5:         }
6:     }
7: }
8: class Runner2 {
9:     public void startRunning() {
10:         for (int i = 0; i <= 10; i++) {
11:             System.out.println("Runner2 " + i);
12:         }
13:     }
14: }
15: public class RunnerExample {
16:     public static void main(String[] args) {
17:         Runner1 runner1 = new Runner1();
18:         Runner2 runner2 = new Runner2();
19:         runner1.startRunning();
20:         runner2.startRunning();
21:     }
22: }

The example program consists of the two classes Runner1 and Runner2, which are structured in the same way.

Both classes contain the startRunning() method, in which a screen output is generated within a for loop; the output shows in which class and in which loop iteration it was generated.

In the main method (from line 16), an instance of each class is created (lines 17 and 18) and the startRunning() method is called on each instance.

If we run our program, we get the following output:

Runner1 0
Runner1 1
Runner1 2
Runner1 3
Runner1 4
Runner1 5
Runner1 6
Runner1 7
Runner1 8
Runner1 9
Runner1 10
Runner2 0
Runner2 1
Runner2 2
Runner2 3
Runner2 4
Runner2 5
Runner2 6
Runner2 7
Runner2 8
Runner2 9
Runner2 10

On the basis of this output, we can clearly see the sequential processing.

First, the startRunning() method of runner1 is processed completely, and only after it has finished does the call on runner2 start.

Let us now take care of the parallelization.

We want to ensure that the startRunning() method of runner1 and runner2 is processed at the same time.

The Runnable interface

The Java means by which we can achieve this goal is the Runnable interface, which we need to implement in our two runner classes.

We carry out this procedure using Runner1 as an example. The procedure is exactly the same for the Runner2 class.

For this purpose we have exactly two things to do.

  1. Implementing the Runnable interface in the class.
  2. Overriding the run() method from the interface.

After that, we can simply place the code to be executed in parallel inside the run() method. In our example we simply call the startRunning() method here.

That's all the rocket science there is to it! Here is the adapted Runner1 class.

1: class Runner1 implements Runnable {
2:     public void startRunning() {
3:         for (int i = 0; i <= 10; i++) {
4:             System.out.println("Runner1 " + i);
5:         }
6:     }
7:     @Override
8:     public void run() {
9:         startRunning();
10:     }
11: }

Now only one thing is missing.

These are the threads in which our code is executed.

For this purpose we have to create a thread for each of our runner classes.

We can simply pass the runner that the respective thread should take care of as a constructor argument.

In code, the whole thing looks like this:

Thread t1 = new Thread(new Runner1());
Thread t2 = new Thread(new Runner2());

Now we can start these threads using the start() instance method.

t1.start();
t2.start();

After calling start(), the code from the run() method of the respective class is executed.

The most important finding here is that the thread t2 is started immediately, without waiting until the execution of t1 is finished.

In other words: the two threads t1 and t2 run in parallel. This fact can also be observed in the screen output of the program.

Runner1 0
Runner2 0
Runner1 1
Runner2 1
Runner1 2
Runner2 2
Runner1 3
Runner2 3
Runner1 4
Runner2 4
Runner1 5
Runner2 5
Runner1 6
Runner2 6
Runner1 7
Runner1 8
Runner1 9
Runner2 7
Runner1 10
Runner2 8
Runner2 9
Runner2 10

As we can see, the screen output is no longer sequential.

Warning: The order in the output may be different for you. Perhaps the output even changes from one program run to the next. This is exactly the problem of difficult reproducibility with multithreading (concurrency) already mentioned.

The point in time when certain hardware resources are available to a thread is determined by the operating system and depends on dynamically changing factors, such as other processes to which hardware resources must also be made available.

Threads using inheritance

An alternative to using the Runnable interface is inheritance.

For example, we can derive the Runner1 class from the Thread class as follows.

class Runner1 extends Thread {
    public void startRunning() {
        for (int i = 0; i <= 10; i++) {
            System.out.println("Runner1 " + i);
        }
    }
    @Override
    public void run() {
        startRunning();
    }
}

Exactly as with the Runnable interface approach, we override the run() method and place the code in it that is to be executed concurrently.

Now we can create an instance, and since Runner1 inherits from the Thread superclass, the start() method is immediately available.

We can therefore start the thread in the following way:

Runner1 runner1 = new Runner1();
runner1.start();

Implementing threads via inheritance leads to more compact code, but has the disadvantage that, since Java does not support multiple inheritance, Runner1 can no longer inherit from any other class. The approach using the Runnable interface is therefore more flexible.

We can also use the Runnable interface to define threads as an anonymous inner class.
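A sketch of this anonymous variant, and its modern shorthand: since Runnable has only a single method, it is a functional interface and can also be written as a lambda since Java 8 (the class name is my own choice):

```java
public class AnonymousRunnerDemo {
    public static void main(String[] args) throws InterruptedException {
        // Runnable as an anonymous inner class
        Thread t1 = new Thread(new Runnable() {
            @Override
            public void run() {
                System.out.println("anonymous runner");
            }
        });
        // The equivalent lambda form
        Thread t2 = new Thread(() -> System.out.println("lambda runner"));
        t1.start();
        t2.start();
        t1.join();
        t2.join();
    }
}
```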

Waiting for death using the join method

Okay, time for a little quiz. What do you think? What does the output of the following program look like?

1: t1.start();
2: t2.start();
3: System.out.println("Thread terminated!");

Or to put it another way: what output are we hoping for?

Obviously, our intention is to produce the screen output in line three only after the threads t1 and t2 have ended.

Okay let's try!

Thread terminated!
Runner1 0
...
Runner2 10

Oops, that didn't work. Did you expect it?

Naturally! Because that is exactly the purpose of a thread: it should run concurrently and not block the main program.

However, there are situations, such as ours, in which the main program (the so-called main thread) should only continue to run when all threads have been processed.

And it is precisely for this purpose that the join() method is intended, which is used as follows:

1: Thread t1 = new Thread(new Runner1());
2: Thread t2 = new Thread(new Runner2());
3: t1.start();
4: t2.start();
5: try {
6:     t1.join();
7:     t2.join();
8: } catch (InterruptedException e) {}
9: System.out.println("Thread terminated!");

The join() method is an instance method of the Thread class.

The calls to this method in lines six and seven ensure that the screen output in line 9 is not executed until both thread t1 and thread t2 have died.

Since the join() method throws an InterruptedException, we need to embed the calls in an appropriate try-catch block.

It is important to understand that the threads t1 and t2 continue to be processed in parallel. This fact can also be seen in the program output:

Runner2 0
Runner1 0
Runner2 1
Runner1 1
Runner2 2
Runner1 2
...
Runner2 7
Runner1 7
Runner1 8
Runner1 9
Runner1 10
Runner2 8
Runner2 9
Runner2 10
Thread terminated!

Great, exactly what we wanted!

The keyword volatile

We have already stated that different threads are working on the same memory.

But that is only half the truth!

Different threads can run on different CPUs and each CPU has a buffer, the so-called cache.

The biggest advantage of the cache is that the CPU can access the cache much faster than the main memory of the computer.

However, the caches of the different CPUs are not synchronized with each other and that can lead to our threads working with different data.

For example, let's assume we have a Boolean value that we initialize with false.

In order to be able to work with this value as effectively as possible, there is a copy of this variable in each cache of our CPUs.

If the value of this variable is now changed from false to true in the first thread, this only happens in the copy of the variable in the cache of the first CPU.

In the cache of the second CPU, however, the variable still contains the value false. So we have created an inconsistency.

And to prevent exactly that, Java has the keyword volatile.

A variable declared volatile is not buffered in the CPU cache but exists only ONCE, centrally in main memory.

All threads using the variable then access the same location in memory. Since the variable only exists once, inconsistencies can no longer occur.

Okay, let's look at how to use volatile.

private volatile boolean flag = false;

Yep, that's all! We only have to add volatile to the variable declaration. (Note that switch would not work as a variable name, since it is a reserved word in Java.)

Since a volatile variable is not cached, using volatile slightly reduces the performance of our program. In return, however, we avoid the consistency problem described above.
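A classic use of volatile is a stop flag that one thread sets and another thread polls. A minimal sketch (class and variable names are my own choice); without volatile, the worker might never see the update and the join could hang forever:

```java
public class VolatileStopDemo {
    // volatile guarantees the worker sees the update from the main thread
    private static volatile boolean running = true;

    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long iterations = 0;
            while (running) {   // re-reads the flag on every pass
                iterations++;
            }
            System.out.println("Worker stopped after " + iterations + " iterations");
        });
        worker.start();
        Thread.sleep(100);      // let the worker spin for a moment
        running = false;        // visible to the worker thanks to volatile
        worker.join();          // terminates, because the worker sees the change
    }
}
```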

What is a deadlock and what is a livelock?

Are you a craftsman? Do you have a toolbox?

No? Okay, you don't have to be a craftsman. But you should at least be able to imagine a toolbox.

The most important thing here is that your toolbox contains a hammer. And exactly one!

Now imagine that you have just moved into a new apartment and that there is still a lot of renovation work to be done. In particular, there are still a lot of nails to be hammered into the wall.

You're popular and have a bunch of threads! Um, I mean friends, who will help you with this. But you only have one hammer.

Your very best friend is Moritz. He really wants to help you. But whenever he wants to drive a nail into the wall, the hammer is in someone else's hand and Moritz cannot work.

So every time, he waits for a resource (the hammer) to be released. This is exactly what can happen to a thread.

If, on top of that, Moritz is holding the box of nails that the current hammer owner needs next, the two wait for each other forever. This situation, in which threads block each other because each is waiting for a resource that another one holds, is called a deadlock.

In addition to the deadlock, there is also the so-called livelock.

In contrast to the deadlock, in which a thread remains stuck in the waiting state, in a livelock a thread is permanently busy but never gets finished.

This situation occurs when your paddle boat has a hole and you avoid sinking by scooping the penetrating water out of the bottom with a bucket.

As long as you don't mend the boat you will never finish this task and you will be in a livelock.
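Staying with the hammer-and-nails picture, a deadlock arises when two threads grab two shared resources in opposite orders. The sketch below (all names are my own invention) shows the standard cure, acquiring the locks in the same consistent order in every thread, so the program terminates instead of hanging:

```java
public class LockOrderDemo {
    private static final Object hammer = new Object();
    private static final Object nails  = new Object();

    public static void main(String[] args) throws InterruptedException {
        // Deadlock recipe: thread A locks hammer then nails, thread B locks
        // nails then hammer -> each waits forever for the other's lock.
        // The fix: BOTH threads acquire the locks in the SAME order.
        Runnable job = () -> {
            synchronized (hammer) {
                synchronized (nails) {
                    System.out.println(Thread.currentThread().getName() + " is hammering");
                }
            }
        };
        Thread a = new Thread(job, "Moritz");
        Thread b = new Thread(job, "Max");
        a.start();
        b.start();
        a.join();
        b.join();   // terminates, because the lock order is consistent
    }
}
```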

Let's look at an example in which two threads are fighting for the same resource. For this, however, we first have to clarify the term atomic operation.

Atomic operations

You are probably familiar with the term atom from physics lessons and know that this term describes an elementary particle that cannot be further decomposed.

And the term atomic is to be interpreted very similarly in computer science.

We speak of an atomic operation when it can be processed in one step. Examples of atomic operations are read and write access.

But even a simple increment such as i++ is not atomic. Instead, it is split internally into the following atomic operations.

  1. The variable i is placed on the execution stack.
  2. The value 1 is placed on the execution stack.
  3. The two values are added.
  4. The result is written back into memory.

What does this have to do with multithreading?

Well, what if two different threads want to increment the same variable?

Then we have to be extremely careful that the threads do not interfere with each other while the atomic operations are being processed.

Let us assume, for example, that thread 2 reads the value of the variable after it has already been increased by thread 1 but before it has been written back.

In this case, thread 2 works with a value that is too small by 1.
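We can provoke exactly this lost-update problem with a small program (names are my own choice). Two threads each increment a shared counter 100,000 times without any synchronization; the final value is frequently less than the expected 200,000:

```java
public class LostUpdateDemo {
    // package-private so the result can be inspected; NOT thread-safe on purpose
    static int count = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable job = () -> {
            for (int i = 0; i < 100_000; i++) {
                count++;   // NOT atomic: read, add, write back
            }
        };
        Thread t1 = new Thread(job);
        Thread t2 = new Thread(job);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Often less than 200000, because increments get lost
        System.out.println("count = " + count);
    }
}
```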

To prevent this, there is the keyword synchronized.

Both methods and code blocks can be declared synchronized. However, we limit ourselves here to the use of synchronized at method level.

A method that performs a thread-safe increment looks like this.

private static synchronized void increment() {
    ++count;
}

The body of this method can only be executed by a single thread at a time, in particular not in parallel.

Since this somewhat undermines the purpose of threads, we should use synchronized with caution. Otherwise we quickly lose the performance gained through multithreading.
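Embedded in a complete, runnable program (class and field names are my own choice), the synchronized increment reliably produces the expected total:

```java
public class SynchronizedCounterDemo {
    // package-private so the result can be inspected
    static int count = 0;

    // Only one thread at a time may execute this method
    private static synchronized void increment() {
        ++count;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable job = () -> {
            for (int i = 0; i < 100_000; i++) {
                increment();
            }
        };
        Thread t1 = new Thread(job);
        Thread t2 = new Thread(job);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("count = " + count); // reliably 200000
    }
}
```

For counters specifically, the java.util.concurrent.atomic package offers an alternative: an AtomicInteger with incrementAndGet() achieves the same thread safety without a synchronized method.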

If the situation often arises that threads come to a standstill because a resource is occupied, we should check the design of our multithreading application and change it if necessary.

The methods wait and notify!

Okay, so far that was a lot of stuff that needs to be digested first.

But there is one more thing we have to look at: the function of the wait() and notify() methods.

These methods are the basis of the important producer-consumer pattern.

This pattern involves at least two threads that communicate with one another.

At least one thread is the so-called producer and at least one other thread is the consumer.

The name says it all! The producer is responsible for producing data that the consumer processes.

As a rule, the results of the producer are first temporarily stored in a queue.

Unfortunately, a detailed examination of the producer-consumer pattern is beyond the scope of this article.

The only important thing for us here is that the producer and consumer have to communicate with each other.

The consumer has to wait until the producer has produced enough data that can be processed.

As soon as enough data has been produced, the consumer must be notified that it can now start its work.

So that the consumer is not overwhelmed, the producer should only work again as soon as the consumer has completely processed the data in the queue.

The producer must therefore be put into the wait state, for which the wait() method exists.

We use the notify() method to implement the notification between the threads.

With notify() we cancel the waiting state of a thread.

Okay, let's see this in action.

At the beginning, the queue is empty. To fill it, the producer thread works until the queue is full.

As soon as the queue is full, the consumer is informed with the help of notify() and begins to process the queue. During this time, the producer goes into the wait state.

If the queue is empty, the producer in turn receives a notify() and begins to work again. The consumer falls back into the wait state.
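The interplay described above can be sketched in a minimal program. All names and the capacity are my own choices; note that wait() and notify() must be called inside a synchronized block on the same lock object, and that wait() is guarded by a while loop, as the Java documentation recommends:

```java
import java.util.LinkedList;
import java.util.Queue;

public class ProducerConsumerDemo {
    private static final Queue<Integer> queue = new LinkedList<>();
    private static final int CAPACITY = 5;
    private static final Object lock = new Object();

    public static void main(String[] args) throws InterruptedException {
        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    synchronized (lock) {
                        while (queue.size() == CAPACITY) {
                            lock.wait();   // queue full: producer waits
                        }
                        queue.add(i);      // produce one item
                        lock.notify();     // wake the waiting consumer
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    synchronized (lock) {
                        while (queue.isEmpty()) {
                            lock.wait();   // queue empty: consumer waits
                        }
                        System.out.println("consumed " + queue.remove());
                        lock.notify();     // wake the waiting producer
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}
```

Since the queue is FIFO, the consumer prints the values 0 through 9 in order.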

Conclusion: The aim of this article was to give you a good overview of the topic of concurrency and parallel programming. In particular, we looked at how threads work in Java.

Did you like the article? Then follow us on Facebook right away.