Concurrent (Parallel) Programming

Internal

Overview

Concurrency and parallelism refer to slightly different things, but we treat concurrent (parallel) programming as one topic simply because we sometimes search for "concurrent" and other times for "parallel".

Concurrent programming (or concurrency) is the management of multiple tasks that exist, and possibly execute, at the same time. Concurrent programming includes management of task execution, communication between tasks and synchronization between tasks.
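The three concerns listed above can be made concrete with a minimal sketch. Go is used here only as an example language (the page does not prescribe one): goroutines stand for task execution, a channel for communication between tasks, and a WaitGroup for synchronization.

<syntaxhighlight lang='go'>
package main

import (
	"fmt"
	"sync"
)

func main() {
	results := make(chan int) // communication between tasks
	var wg sync.WaitGroup     // synchronization between tasks

	// Task execution: start three tasks that run concurrently with main.
	for i := 1; i <= 3; i++ {
		wg.Add(1)
		go func(n int) {
			defer wg.Done()
			results <- n * n // send a result back over the channel
		}(i)
	}

	// Close the channel once every task has signaled completion.
	go func() {
		wg.Wait()
		close(results)
	}()

	for r := range results {
		fmt.Println(r)
	}
}
</syntaxhighlight>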

Explicitly addressing concurrency in software is becoming more important as Moore's Law no longer holds. For a while, the number of transistors on a chip doubled every two years. That brought smaller transistor size and lower power consumption. Smaller transistors switch faster, so as density increased there was a natural speedup: the exponential increase in density brought an exponential increase in speed. However, this scaling down came with a power and temperature problem. The dynamic power consumption is P = α * C * V<sup>2</sup> * F, where α is the fraction of time the transistors spend switching, C is the capacitance (which goes down as the transistor shrinks), F is the clock frequency and V is the supply voltage. With smaller transistors, C and V are smaller, so for the same power consumption F can go higher. The fact that the voltage scales down with transistor size is called Dennard scaling. However, transistors cannot get smaller beyond a certain threshold (noise, power leakage, etc.), so V stops scaling and the frequency cannot keep going up. If the frequency is pushed higher anyway, the power consumption goes beyond the chip's capacity to dissipate heat and the chip melts. This problem is known as the "power wall".

The second factor that works against increasing the frequency as a source of overall speedup is the von Neumann bottleneck: main memory access is orders of magnitude slower than the processor, so even a fast processor idles waiting for memory accesses. This is worked around by putting fast on-chip caches on the processor.
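To make the power formula concrete, here is a minimal sketch (the scaling factors below are illustrative assumptions, not real process data). It shows that while C and V shrink, the frequency can double at roughly constant power, but once V stops scaling, doubling F simply doubles the power.

<syntaxhighlight lang='go'>
package main

import "fmt"

// dynamicPower computes P = alpha * C * V^2 * F, in relative units.
func dynamicPower(alpha, c, v, f float64) float64 {
	return alpha * c * v * v * f
}

func main() {
	// Illustrative, made-up baseline values (relative units, not a real chip).
	alpha, c, v, f := 0.2, 1.0, 1.0, 1.0
	base := dynamicPower(alpha, c, v, f)

	// Dennard scaling era: the transistor shrinks, so C and V drop,
	// and the frequency can double at roughly constant power.
	scaled := dynamicPower(alpha, c*0.7, v*0.85, f*2.0)

	// Post-Dennard: V no longer scales, so doubling F doubles the power.
	powerWall := dynamicPower(alpha, c, v, f*2.0)

	fmt.Printf("baseline: %.2f, scaled: %.2f, frequency-only doubling: %.2f\n",
		base, scaled, powerWall)
}
</syntaxhighlight>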

Hardware design works around the inability to keep scaling transistors down by packing more cores on the chip. A core is essentially sequential, so in order to use such a system effectively, concurrent programming is necessary.
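As an illustration, here is a minimal sketch that spreads a computation over one goroutine per core, so the runtime can schedule them in parallel on separate cores; a single sequential flow of control would leave all but one core idle. The workload and the way it is split are arbitrary assumptions made for the example.

<syntaxhighlight lang='go'>
package main

import (
	"fmt"
	"runtime"
	"sync"
)

func main() {
	workers := runtime.NumCPU() // one worker per available core
	const n = 1_000_000

	partial := make([]int64, workers) // each worker writes only its own slot
	var wg sync.WaitGroup

	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for i := w; i < n; i += workers {
				partial[w] += int64(i)
			}
		}(w)
	}
	wg.Wait()

	var total int64
	for _, p := range partial {
		total += p
	}
	fmt.Println("cores:", workers, "sum:", total)
}
</syntaxhighlight>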

Implementations in programming languages:

Synchronization

A way to address concurrent updates of shared state is to use locks. Locks are problematic because they rely on the programmer's ability to identify the areas that require protection against concurrent access. As the complexity of the program increases, relying on this ability becomes increasingly risky.
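A minimal lock-based sketch in Go (sync.Mutex is used here just as one concrete lock mechanism). The comments point at the weakness described above: the protection exists only where the programmer remembered to take the lock.

<syntaxhighlight lang='go'>
package main

import (
	"fmt"
	"sync"
)

// counter is shared state protected by a lock. The programmer must
// remember to take the lock around every access; nothing enforces it.
type counter struct {
	mu sync.Mutex
	n  int
}

func (c *counter) increment() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c counter
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.increment()
		}()
	}
	wg.Wait()
	fmt.Println(c.n) // 100; without the lock the result would be unpredictable
}
</syntaxhighlight>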

An alternative approach is to use an actor model, where the state is owned by a single actor and other tasks interact with it only by sending messages.
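A minimal actor-style sketch, again in Go. A single goroutine owning the state and reached only through channels stands in for a full actor framework; this is an assumption about the shape of the approach, not a specific library.

<syntaxhighlight lang='go'>
package main

import "fmt"

// An actor-style counter: the state lives in a single goroutine and is
// reached only through messages, so no lock is needed.
func main() {
	increments := make(chan struct{})
	reads := make(chan chan int)

	// The "actor": the only goroutine that ever touches n.
	go func() {
		n := 0
		for {
			select {
			case <-increments:
				n++
			case reply := <-reads:
				reply <- n
			}
		}
	}()

	for i := 0; i < 100; i++ {
		increments <- struct{}{} // ask the actor to increment
	}

	reply := make(chan int)
	reads <- reply // ask the actor for the current value
	fmt.Println(<-reply) // 100
}
</syntaxhighlight>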