Posts

Showing posts from July, 2025

Week 30

       This week's material deepened my understanding of concurrency control and how it can get out of control. Semaphores emerged as a versatile synchronization tool, able to function as both locks and condition variables. Their atomic operations, sem_wait() and sem_post(), enable thread coordination, whether enforcing mutual exclusion in critical sections or guaranteeing execution order. Improper usage risks deadlock, as seen in the dining philosophers problem, where circular wait conditions arise unless lock acquisition is carefully ordered.

       The readings from this week also highlighted non-deadlock concurrency bugs, which are common in real-world systems. Atomicity violations, like a thread being interrupted between checking and using a pointer, expose flawed assumptions about uninterrupted execution. Order violations, where threads rely on unchecked execution sequences, further underscore the need for synchronization primitives to enforce correctness. Deadlocks remain insidio...

Week 29

       Week 29 has been a crazy week trying to keep up with my readings and lectures while doing midterm review. I think I'm doing okay, but I would like to brush up on a few topics before the weekend.

       In chapters 26-29 of OSTEP, I learned how operating systems manage concurrency through threads, locks, and synchronization primitives. The reading began by introducing threads as independent execution units within a process that share the same address space, enabling parallelism but requiring careful coordination to avoid race conditions, which are a big no-no. The key challenge is ensuring mutual exclusion in critical sections, where shared resources are accessed. Simple solutions like disabling interrupts work only on single-processor systems, while atomic hardware instructions enable efficient spinlocks on multicore systems. However, we learned that spinning wastes CPU cycles, so OS-supported sleep/wake mechanisms like yield() are used to block threads unti...

Week 28

      Four weeks into class, we've learned a lot. Chapters 18-22 covered the fundamentals of memory virtualization in operating systems, mainly focusing on how the operating system creates the illusion of private, contiguous memory for each process while efficiently managing physical resources. We began with basic address translation using page tables, which map virtual addresses to physical frames, allowing processes to operate independently of their actual location in memory. This introduces challenges such as page-table size, leading to multi-level page tables that reduce memory overhead by storing only valid mappings.

      Then we examined mechanisms like the Translation Lookaside Buffer (TLB), which accelerates address translation by caching frequently used mappings. A key theme was page replacement policies like LRU and FIFO. I wasn't familiar with LRU, but I knew FIFO from my old community college data structures and algorithms course. These algorithms dete...

Week 27

     This week in CST 334, I deepened my understanding of memory virtualization, focusing on segmentation as a key mechanism for optimizing memory usage. Segmentation divides a process's address space into logical segments (code, heap, stack), each with its own base and bounds registers. Unlike the simpler base-and-bounds approach, which allocates the entire address space contiguously, segmentation allows the OS to allocate only the portions of memory that are physically in use. This significantly reduces internal fragmentation while enabling sparse address spaces, where large virtual memory regions don't require physical memory unless actively used.

     The hardware translation process also became clearer. Virtual addresses are split into a segment selector (top bits) and an offset (remaining bits). The hardware combines the segment's base register with the offset to ...

Week 26

    There was a lot to learn this week, especially with all the lectures and readings assigned. The first thing I learned about was CPU scheduling algorithms, which aim to optimize turnaround time and response time. I learned about policies like FIFO, SJF, STCF, RR, and MLFQ. FIFO is a term I first learned in my data structures and algorithms course: the first one in is the first one out. It is simple, but long jobs end up delaying short ones. Shortest job first (SJF) is optimal for turnaround time but requires knowing job lengths in advance. Shortest time-to-completion first (STCF) handles dynamic arrivals better, where we don't know what's coming next. Round robin (RR) time-slices fairly and improves response time, but worsens turnaround time. MLFQ uses priority queues and feedback to classify jobs. The rules include: higher priority runs first; equal-priority jobs run round-robin; new jobs start a...