Python Mastery: Complete Beginner to Professional

Multiprocessing: True Parallelism

Bypassing the GIL to unleash the full power of Multi-Core CPUs.

1. The Big Idea (ELI5)

👶 Explain Like I'm 10: Multiple Kitchens

In Threading, we had 4 chefs fighting over 1 Stove. Multiprocessing builds 4 separate Kitchens (Processes).

  • The Power: Each kitchen has its own Stove (CPU Core). They cook completely independently.
  • The Cost: Building a kitchen is expensive (Slow startup). Also, since they are in different rooms, Chef A can't easily hand a plate to Chef B (Memory is not shared). They need a special window (Queue/Pipe) to pass things, as the sketch below demonstrates.
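
You can see the separate-memory rule directly. Here is a minimal sketch (the `data` list and `mutate` function are illustrative names): the child mutates its own copy of a global, and the parent never notices.

PYTHON
from multiprocessing import Process

data = []

def mutate():
    data.append("child change")   # mutates the CHILD's private copy
    print("Child sees:", data)    # ['child change']

if __name__ == "__main__":
    p = Process(target=mutate)
    p.start()
    p.join()
    print("Parent sees:", data)   # [] (the change never crossed the wall)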

2. Syntax: Creating Processes

It looks similar to Threading, but under the hood, each Process launches a brand-new Python interpreter.

PYTHON
from multiprocessing import Process

def heavy_calculation(name):
    # This runs on a separate CPU Core!
    total = sum(i * i for i in range(10_000_000))
    print(f"{name} finished: {total}")

if __name__ == "__main__":
    # ⚠️ CRITICAL: This guard prevents endless process spawning on Windows
    
    p1 = Process(target=heavy_calculation, args=("Core-1",))
    p2 = Process(target=heavy_calculation, args=("Core-2",))
    
    p1.start()
    p2.start()
    p1.join()
    p2.join()

Windows User Alert: You MUST wrap your process creation code in `if __name__ == "__main__":`. Windows starts each child by re-importing your script; without the guard, every child would try to spawn children of its own, and Python aborts with a RuntimeError rather than letting the processes multiply forever.
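
The root cause is the process "start method": Windows (and macOS, since Python 3.8) uses "spawn", which imports your module fresh in every child, while Linux has traditionally defaulted to "fork". A quick way to check, as a minimal sketch:

PYTHON
import multiprocessing as mp

if __name__ == "__main__":
    # "spawn" on Windows/macOS; traditionally "fork" on Linux
    print(mp.get_start_method())

    # You can opt into "spawn" everywhere for consistent behavior:
    # mp.set_start_method("spawn")  # call at most once, early in the program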

3. Communication: Queues and Pipes

Processes don't share global variables. To send data back, use a `Queue`.

PYTHON
from multiprocessing import Process, Queue

def worker(q):
    q.put("Result from Process")

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    
    print(q.get()) # Blocks until the worker puts "Result from Process"
    p.join()
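
The section title also promised Pipes. A `Pipe` is a lighter channel with exactly two endpoints, good for one-to-one communication, while a `Queue` is safe for many producers and consumers. A minimal sketch:

PYTHON
from multiprocessing import Process, Pipe

def worker(conn):
    conn.send("Result via Pipe")  # serialize and push through the pipe
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = Pipe()
    p = Process(target=worker, args=(child_conn,))
    p.start()

    print(parent_conn.recv())  # Blocks until the child sends
    p.join()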

4. Shared Memory (Advanced)

Copying data via Queues is slow for huge arrays (e.g., 4K Video Frames). You can cheat the distinct-memory rule using `Value` (for a single number) or `Array` (for a fixed-type array).

PYTHON
from multiprocessing import Process, Value

def increment(counter):
    with counter.get_lock(): # Yes, we still need locks!
        counter.value += 1

if __name__ == "__main__":
    counter = Value('i', 0) # 'i' = Integer
    
    procs = [Process(target=increment, args=(counter,)) for _ in range(10)]
    
    for p in procs: p.start()
    for p in procs: p.join()
    
    print(counter.value) # 10
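
`Array` works the same way for fixed-type sequences. A minimal sketch (you declare the element type up front with a typecode, just like with `Value`):

PYTHON
from multiprocessing import Process, Array

def double_all(arr):
    for i in range(len(arr)):
        arr[i] *= 2  # writes go straight into shared memory

if __name__ == "__main__":
    arr = Array('d', [1.0, 2.0, 3.0])  # 'd' = double-precision float

    p = Process(target=double_all, args=(arr,))
    p.start()
    p.join()

    print(arr[:])  # [2.0, 4.0, 6.0] (the parent sees the child's writes)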

5. Modern Pattern: ProcessPoolExecutor

Just as with Threading, don't use the raw `Process` class unless you have to. Use the Executor to map a function over a large dataset.

PYTHON
from concurrent.futures import ProcessPoolExecutor
import os

def waste_cpu(x):
    # Returns PID so we can prove it's different processes
    return f"Input {x} processed by PID {os.getpid()}"

if __name__ == "__main__":
    # Uses all available CPU cores by default
    with ProcessPoolExecutor() as executor:
        results = executor.map(waste_cpu, range(5))
        
    for res in results:
        print(res)
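
`executor.map` returns results in input order. When jobs have uneven runtimes and you want each result the moment its worker finishes, the usual companion pattern is `submit` plus `as_completed`. A minimal sketch (the `square` function is illustrative):

PYTHON
from concurrent.futures import ProcessPoolExecutor, as_completed

def square(x):
    return x * x

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=4) as executor:
        futures = [executor.submit(square, n) for n in range(5)]

        for fut in as_completed(futures):  # yields futures as they finish
            print(fut.result())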

What's Next?

You have conquered the holy trinity of concurrency: Threading, Asyncio, and Multiprocessing. Next up is Testing & Debugging, to make sure all this complex code is actually correct.