Posts

Week 32

Wow... It really is the last week of CST 334. I vividly remember the week when we went through and downloaded Docker. Using PowerShell and the terminal seemed like something straight out of a movie, but learning low-level tools like that is key to success as a computer scientist. The PAs were also a hurdle to get accustomed to. Coding tasks are always daunting, but doing them in C was even more of a pain. Then again, languages where you're in charge of memory allocation are equally important to learn. Over the course, we learned about processes and how to manage them. There was a lot of emphasis on memory virtualization, and then we had the midterm. Then came concurrency and persistence. And as I'm writing this, I'm preparing for the upcoming final exam. For the final, I will try to be more cognizant of the time. On the midterm, I was surprised by the time, as I thought I had more. I did all the big questions only to realize I only had 12 m...

Week 31

This week has been hectic, as I am on a totally different continent, fifteen hours ahead of my normal time zone. But I packed accordingly, so I've been able to keep up on lecture readings and notes. From this week's reading, I have learned that a file system is a crucial component of an operating system, responsible for organizing and managing data on a disk. The very simple file system serves as a foundational model for understanding core concepts. File systems use inodes, or index nodes, to store metadata about files, such as permissions, size, and pointers to data blocks. Inodes can employ multi-level indexing, including direct, indirect, and double/triple indirect pointers, to efficiently handle files of varying sizes. Directories are structured as special files containing mappings of human-readable names to inode numbers, allowing the system to locate files through path traversal. Free space is managed using bitmaps (for inodes and data blocks), ensuring efficient allocation and d...
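To make the multi-level indexing idea concrete, here is a minimal sketch in C of what an inode might look like. The field names and the count of twelve direct pointers are illustrative assumptions, not the exact vsfs layout:

    /* Toy on-disk inode, loosely modeled on the very simple file system.
     * Field names and sizes are assumptions for illustration. */
    #include <stdint.h>

    #define NUM_DIRECT 12  /* assumed number of direct pointers */

    struct inode {
        uint16_t mode;                /* file type and permission bits */
        uint32_t size;                /* file size in bytes */
        uint32_t direct[NUM_DIRECT];  /* block numbers of the first data blocks */
        uint32_t indirect;            /* block holding an array of block numbers */
        uint32_t dbl_indirect;        /* block of pointers to indirect blocks */
    };

Assuming 4 KB blocks and 4-byte block numbers, the direct pointers only cover 48 KB, but one indirect block adds 1024 more pointers and a double indirect block adds 1024 × 1024, which is how a tiny fixed-size inode can address very large files.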

Week 30

This week's material deepened my understanding of concurrency control and how it can get out of control. Semaphores emerged as a synchronization tool, functioning as both locks and condition variables. Their atomic operations, sem_wait() and sem_post(), enable thread coordination, enforcing mutual exclusion in critical sections or guaranteeing execution order. Improper usage risks deadlocks, as seen in the dining philosophers problem, where circular wait conditions arise unless lock acquisition is carefully ordered (see the sketch below).

The readings from this week also highlighted non-deadlock concurrency bugs, which happen a lot in real-world systems. Atomicity violations, like a thread being interrupted between checking and using a pointer, expose flawed assumptions about uninterrupted execution. Order violations, where threads rely on unchecked execution sequences, further underscore the need for synchronization primitives to enforce correctness. Deadlocks remain insidio...
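As a minimal sketch of that lock-ordering fix, here is a dining philosophers variant in C with POSIX semaphores; each philosopher always grabs the lower-numbered fork first, so a circular wait can't form. The five-philosopher setup and single meal per thread are just for brevity (compile with -pthread on Linux):

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define N 5
    sem_t forks[N];  /* one binary semaphore per fork */

    void *philosopher(void *arg) {
        long id = (long)arg;
        int left = id, right = (id + 1) % N;
        int first  = left < right ? left : right;  /* lower-numbered fork first */
        int second = left < right ? right : left;

        sem_wait(&forks[first]);   /* everyone acquires in the same global order */
        sem_wait(&forks[second]);
        printf("philosopher %ld eats\n", id);
        sem_post(&forks[second]);
        sem_post(&forks[first]);
        return NULL;
    }

    int main(void) {
        pthread_t t[N];
        for (int i = 0; i < N; i++) sem_init(&forks[i], 0, 1);
        for (long i = 0; i < N; i++) pthread_create(&t[i], NULL, philosopher, (void *)i);
        for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
        return 0;
    }

If every philosopher instead grabbed their left fork first, all five could end up holding one fork while waiting forever on the other; breaking that circular-wait condition is what makes this version safe.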

Week 29

Week 29 has been a crazy week trying to keep up on my readings/lectures and doing midterm review. I think I'm doing okay, but I would like to brush up on a few topics before the weekend.

In chapters 26-29 of OSTEP, I learned how operating systems manage concurrency through threads, locks, and synchronization primitives. The reading started by introducing threads as independent execution units within a process that share the same address space, enabling parallelism but requiring careful coordination to avoid race conditions, which are a big no-no. The key challenge was ensuring mutual exclusion in critical sections, where shared resources are accessed. Simple solutions like disabling interrupts work only on single-processor systems, while atomic hardware instructions enable efficient spinlocks for multicore systems. However, we learned that spinning wastes CPU cycles, so OS-supported sleep/wake mechanisms like yield() are used to block threads unti...
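Here is a bare-bones spinlock sketch in C built on an atomic exchange, the kind of hardware primitive the chapters describe. I'm assuming the GCC/Clang __sync_lock_test_and_set builtin; a production lock would back off or sleep instead of busy-waiting:

    #include <pthread.h>
    #include <stdio.h>

    typedef struct { volatile int flag; } spinlock_t;

    void spin_lock(spinlock_t *l) {
        /* atomically set flag to 1 and get the old value back;
         * if the old value was 1, another thread holds the lock */
        while (__sync_lock_test_and_set(&l->flag, 1) == 1)
            ;  /* spin, burning CPU cycles */
    }

    void spin_unlock(spinlock_t *l) {
        __sync_lock_release(&l->flag);  /* set flag back to 0 */
    }

    spinlock_t lock;
    long counter = 0;

    void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            spin_lock(&lock);
            counter++;            /* critical section: one thread at a time */
            spin_unlock(&lock);
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("counter = %ld\n", counter);  /* always 200000 with the lock */
        return 0;
    }

Without the lock the two workers race on counter++ and the final value comes up short; with it the result is always 200000, at the cost of spinning whenever there's contention.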

Week 28

Four weeks into class, we've learned a lot. Chapters 18-22 covered fundamental concepts of memory virtualization in operating systems, mainly focusing on how the operating system creates the illusion of private, contiguous memory for each process while efficiently managing physical resources. We first began with basic address translation using page tables, which map virtual addresses to physical frames, allowing processes to operate independently of their actual memory location. This introduces challenges such as page table size, leading to multi-level page tables that reduce memory overhead by only storing valid mappings.

Then, we examined mechanisms like the Translation Lookaside Buffer (TLB) that accelerate address translation by caching frequent mappings. A key theme was page replacement policies like LRU and FIFO. I wasn't familiar with LRU, but I was familiar with FIFO from my old community college data structures and algorithms course. These algorithms dete...
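To see the translation mechanics, here is a small sketch in C that splits a virtual address into a VPN and an offset, assuming 4 KB pages and a tiny made-up linear page table (no TLB, no valid bits):

    #include <stdio.h>
    #include <stdint.h>

    #define PAGE_SIZE   4096
    #define OFFSET_BITS 12             /* log2(4096) */

    /* toy linear page table: VPN -> physical frame number */
    uint32_t page_table[4] = {7, 3, 0, 9};

    uint32_t translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr >> OFFSET_BITS;     /* top bits pick the page */
        uint32_t offset = vaddr & (PAGE_SIZE - 1);  /* low 12 bits stay the same */
        uint32_t pfn    = page_table[vpn];          /* table walk (no TLB here) */
        return (pfn << OFFSET_BITS) | offset;
    }

    int main(void) {
        uint32_t v = 0x1234;  /* VPN 1, offset 0x234 */
        printf("virtual 0x%x -> physical 0x%x\n", v, translate(v));
        return 0;
    }

A TLB would simply cache the vpn-to-pfn result so most translations skip the table walk entirely, and a multi-level table would split the VPN bits again to index a tree of smaller tables.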

Week 27

This week in CST 334, I deepened my understanding of memory virtualization, focusing on segmentation as a key mechanism to optimize memory usage. Segmentation divides a process's address space into logical segments (code, heap, stack), each with its own base and bounds registers. Unlike the simpler base-and-bounds approach, which allocates the entire address space contiguously, segmentation allows the OS to allocate only the physically used portions of memory. This significantly reduces internal fragmentation while enabling sparse address spaces, where large virtual memory regions don't require physical memory unless actively used.

The hardware translation process became clearer. Virtual addresses are split into a segment selector (top bits) and an offset (remaining bits). The hardware combines the segment's base register with the offset to ...
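A minimal sketch of that translation in C, assuming a 14-bit address space where the top 2 bits select the segment; the base/bounds values are made up, and I'm ignoring the stack's backward growth for simplicity:

    #include <stdio.h>
    #include <stdint.h>

    #define SEG_SHIFT 12      /* bottom 12 bits are the offset */
    #define SEG_MASK  0x3000  /* top 2 bits of a 14-bit address */
    #define OFF_MASK  0xFFF

    struct segment { uint32_t base; uint32_t bounds; };

    /* 00 = code, 01 = heap, 10 = unused, 11 = stack (made-up values) */
    struct segment segtable[4] = {
        {32768, 2048}, {34816, 3072}, {0, 0}, {28672, 2048}
    };

    int translate(uint32_t vaddr, uint32_t *paddr) {
        uint32_t seg    = (vaddr & SEG_MASK) >> SEG_SHIFT;  /* segment selector */
        uint32_t offset = vaddr & OFF_MASK;
        if (offset >= segtable[seg].bounds)
            return -1;                         /* out of bounds: fault */
        *paddr = segtable[seg].base + offset;  /* base + offset */
        return 0;
    }

    int main(void) {
        uint32_t p;
        if (translate(0x1068, &p) == 0)  /* heap segment, offset 0x68 */
            printf("virtual 0x1068 -> physical %u\n", p);
        return 0;
    }

An offset at or past the segment's bounds is exactly the case where real hardware would raise a segmentation fault.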

Week 26

There was a lot to learn this week, especially with all the lectures and readings assigned. The first thing I learned about was CPU scheduling algorithms, which aim to optimize turnaround time and response time. I learned about policies like FIFO, SJF, STCF, RR, and MLFQ. FIFO is a term I first learned in my data structures and algorithms course, in which the first one in is the first one out. It is simple, but long jobs end up delaying short ones. Shortest job first is optimal for turnaround time, but requires knowing job lengths. Shortest time to completion first is better for dynamic arrivals, where we don't know what's coming next. Round robin is fair and great for response time, but worsens turnaround time. MLFQ uses priority queues and feedback to classify jobs. The rules include: higher priority runs first; equal priority means round robin; new jobs start a...
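To see why FIFO hurts when a long job arrives first, here is a small C sketch comparing average turnaround time under FIFO and SJF for three jobs that all arrive at time 0; the job lengths are arbitrary numbers I picked:

    #include <stdio.h>
    #include <stdlib.h>

    int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    double avg_turnaround(const int *len, int n) {
        double finish = 0, total = 0;
        for (int i = 0; i < n; i++) {
            finish += len[i];  /* job i completes here */
            total  += finish;  /* turnaround = completion - arrival (all 0) */
        }
        return total / n;
    }

    int main(void) {
        int fifo[] = {100, 10, 10};  /* long job first delays the short ones */
        printf("FIFO: %.1f\n", avg_turnaround(fifo, 3));  /* (100+110+120)/3 */

        int sjf[3];
        for (int i = 0; i < 3; i++) sjf[i] = fifo[i];
        qsort(sjf, 3, sizeof(int), cmp);                  /* shortest job first */
        printf("SJF:  %.1f\n", avg_turnaround(sjf, 3));   /* (10+20+120)/3 */
        return 0;
    }

The long-job-first FIFO order averages 110 time units, while sorting shortest-first drops it to 50; that gap is the convoy effect the reading warns about.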