Lecture 6

last time - discussed how threads accessing the same memory location can cause race conditions ("data race")

processes can have similar issues with shared memory - a block of memory allocated by the OS for multiple processes to share. incorrect use of this shared memory can also cause race conditions

processes and threads can also both have race conditions with shared resources like i/o devices

last time, also discussed technique for mitigating race conditions - lock/mutex

analogy for mutex lock: bathroom lock. enter bathroom and lock, perform task, unlock and next thread can access.

how does a mutex object actually work under the hood?

// example of bad code: busy-loop ("spin") until the lock opens
while (!unlocked) {}
// "acquire" the lock
unlocked = false;
glob++;
unlocked = true;
// however, this won't work as desired: two threads could
// both read `unlocked` as true at the same time, before
// either sets it to false, and then simultaneously
// modify `glob`

the fix is to make the read-and-set an atomic operation - e.g. test_and_set(&lock), which reads the old value of the lock and sets it in one indivisible step

"thread-safe" as a term applies not just to mutexes and other synchronization tools, but also to regular functions

question: why not make everything thread-safe?

(note: unlock doesn't need to be thread-safe, just lock)

our previous example with m.lock() is not efficient:

...

int glob{ 0 };
std::mutex m;

void Foo(std::string name) {
    m.lock();
    for (int i{0}; i < 100'000; i++) {
        glob++;
    }
    m.unlock();
}

...

reminder: after locking a mutex, we always need to remember to unlock it - forgetting (or skipping it via an early return or exception) leaves the resource locked forever

RAII: "resource acquisition is initialization"

example with lock_guard:

...

int glob{ 0 };
std::mutex m;

void Foo(std::string name) {
    std::lock_guard<std::mutex> m_guard{ m };
    for (int i{0}; i < 100'000; i++) {
        glob++;
    }
    // no need to unlock - m_guard releases m automatically
    // when it goes out of scope at the end of the function
}

...

currently, the mutex is our synchronization tool - it allows us to avoid race conditions by making sure a shared resource is used by only one entity/thread/process at a time

however, there are other synchronization tools as well

many different options because scenarios may require different approaches

for the purposes of our course, we will just stick with mutexes, unless we need more tools

deadlock

sample piece of code:

// each variable has mutex associated with it
int g1, g2;
std::mutex m1, m2;

void t1_func() {
    m1.lock();
    m2.lock();
    ...
    m1.unlock();
    m2.unlock();
}

void t2_func() {
    m2.lock();
    m1.lock();
    ...
    m2.unlock();
    m1.unlock();
}

the above threads are individually well-written and will not cause race conditions, but together they have another hazard: deadlock. because they acquire the two mutexes in opposite orders, t1 can lock m1 while t2 locks m2, and then each thread waits forever for the lock the other one holds