Reactive Programming in Java: How, Why, and Is It Worth Doing?
Why has reactive programming become so popular? At a certain point, CPU clock speeds stopped growing, which meant developers could no longer count on their programs getting faster on their own: the programs had to be parallelized.
The picture shows CPU frequency growing through the 1990s and rising sharply in the early 2000s. As it turned out, that was the limit.
Why did the frequency growth stop?
Microchip transistor sizes reached their minimum: the p-n junction became as thin as physically possible. As the transistor-size graph shows, feature sizes keep shrinking while the number of transistors per chip keeps growing.
That scaling is what drove frequencies up: electrical signals travel at close to the speed of light, so the shorter the distances inside the processor, the less time each operation takes. But the technology hit its physical limit, and something else had to be conceived.
Multithreading
And we already know what that something was: multi-core processors. Instead of relying on ever-faster single cores, the industry decided to increase their number. But using multiple cores effectively requires multithreading.
Multithreading is a complex topic, yet in today's world it is unavoidable. A typical computer now has 4 or more cores running multiple threads, and a powerful modern server may have up to 100 cores. If your code doesn't use multiple threads, that hardware gives you no benefit, which is why the whole industry is moving to take advantage of it.
But dangers lurk here. Multithreaded code is hard to write: synchronization, race conditions, and laborious debugging have caused programmers a great deal of trouble, and the cost of such development keeps rising.
In Java, multithreading appeared a long time ago; it has been used since the very first version.
It looks like this:
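Roughly speaking, you create Thread objects by hand, protect shared state with a lock, block with wait/notify, and join threads explicitly. The sketch below is only illustrative; the class and field names are made up.

    public class HandoffExample {
        // Bare multithreading primitives: a shared lock object, synchronized blocks,
        // wait/notify, and explicit start/join calls.
        private static final Object lock = new Object();
        private static String message = null;

        public static void main(String[] args) throws InterruptedException {
            Thread producer = new Thread(() -> {
                synchronized (lock) {
                    message = "hello";
                    lock.notifyAll();          // wake up anyone waiting for the result
                }
            });

            Thread consumer = new Thread(() -> {
                synchronized (lock) {
                    while (message == null) {  // guard against spurious wake-ups
                        try {
                            lock.wait();       // releases the lock and blocks
                        } catch (InterruptedException e) {
                            Thread.currentThread().interrupt();
                            return;
                        }
                    }
                    System.out.println("Got: " + message);
                }
            });

            consumer.start();
            producer.start();
            consumer.join();                   // waiting for both threads by hand
            producer.join();
        }
    }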
Writing code for a large system with these raw multithreading primitives is, to put it mildly, hard work. Nobody does it that way anymore; it is like writing everything in assembly.
Worse, in many cases multithreading can lower performance instead of improving it.
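To make that concrete, here is a rough sketch (not a real benchmark: the names and iteration count are arbitrary, and there is no warm-up). Two threads incrementing one shared counter often finish no faster than a single thread, and sometimes slower, because the trivial work is dominated by contention on the shared variable.

    import java.util.concurrent.atomic.AtomicLong;

    public class ContentionSketch {
        static final long ITERATIONS = 50_000_000L;

        public static void main(String[] args) throws InterruptedException {
            AtomicLong counter = new AtomicLong();

            long start = System.nanoTime();
            for (long i = 0; i < ITERATIONS; i++) {
                counter.incrementAndGet();
            }
            System.out.printf("1 thread:  %d ms%n", (System.nanoTime() - start) / 1_000_000);

            counter.set(0);
            Runnable half = () -> {
                for (long i = 0; i < ITERATIONS / 2; i++) {
                    counter.incrementAndGet();  // both threads fight over one shared variable
                }
            };
            Thread t1 = new Thread(half);
            Thread t2 = new Thread(half);
            start = System.nanoTime();
            t1.start();
            t2.start();
            t1.join();
            t2.join();
            System.out.printf("2 threads: %d ms%n", (System.nanoTime() - start) / 1_000_000);
            // On many machines the two-thread run is no faster and may be slower:
            // the per-increment work is tiny, so synchronization cost dominates.
        }
    }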
What should be done then?
In most situations, parallel programming can be replaced by asynchronous programming. Look at the illustration above. In the left picture, the kid wants to help his mother: he takes the laundry out of the washer and hands it to mom, and she puts it in the basket. A program can work the same way in two threads: a mother thread and a kid thread. In theory, performance should increase: two people are better than one, and we have engaged two cores. But in real life the kid pulls out a piece of laundry and then waits for his mother to take it, or the mother stands waiting for the next piece from him. They actually get in each other's way, and handing over the laundry takes time of its own. The mother would finish faster alone.
The same thing happens inside a computer, so dealing with parallelism is not as easy as it seems: synchronization between execution threads genuinely takes a lot of time.
In the right picture, a young man has bought an automatic washing machine. While it is washing, he can read a book: he does what he likes without worrying about the laundry, and once the wash is finished he hears a signal and reacts to it. There is parallelism but no synchronization, which means no time is wasted on synchronization, an evident benefit.
This is the asynchronous approach. We hand an entire task, not a part of one, to a separate executor. In the left picture, the mother and the kid work on a shared task; in the right picture, the washer and the young man each do their own. At some point they meet: the washer finishes, and the young man puts his book aside. But for the hour and a half the washer was running, he was perfectly happy reading and not thinking about the laundry.
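In Java, one way to sketch this idea is with CompletableFuture; the code below is illustrative only, with made-up names and timings. The "washer" is a task handed to an executor (here the common pool), and the "signal" is a callback attached to it, while the main thread keeps doing its own thing.

    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.TimeUnit;

    public class LaundryExample {
        public static void main(String[] args) {
            // The "washer": a whole task handed off to a separate executor.
            CompletableFuture<String> laundry = CompletableFuture.supplyAsync(() -> {
                sleep(2);                       // the wash cycle
                return "clean laundry";
            });

            // The "signal": a callback that runs only when the wash is done.
            CompletableFuture<Void> done = laundry.thenAccept(result ->
                    System.out.println("Done: " + result));

            // Meanwhile the main thread "reads a book" instead of blocking.
            System.out.println("Reading a book...");

            done.join();                        // only so this small demo waits for the callback
        }

        private static void sleep(int seconds) {
            try {
                TimeUnit.SECONDS.sleep(seconds);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }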