Week 28
Four weeks into class, we've learned a lot. Chapters 18-22 covered fundamental concepts of memory virtualization in operating systems, mainly focusing on how the operating system creates the illusion of private, contiguous memory for each process while efficiently managing physical resources. We began with basic address translation using page tables, which map virtual addresses to physical frames, allowing processes to operate independently of their actual location in memory. This introduced challenges such as page table size, leading to multi-level page tables that reduce memory overhead by only storing valid mappings.
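The split-and-look-up step behind page-table translation can be sketched in a few lines of Python. The page size, mappings, and function names here are my own toy choices for illustration, not anything from the chapters:

```python
# Toy single-level address translation: split a virtual address into a
# virtual page number (VPN) and an offset, look up the VPN in a page
# table, and combine the physical frame number (PFN) with the offset.

PAGE_SIZE = 4096     # hypothetical 4 KiB pages -> 12 offset bits
OFFSET_BITS = 12

# Hypothetical page table: VPN -> PFN. Only valid mappings are stored,
# the same idea that makes multi-level page tables space-efficient.
page_table = {0: 7, 1: 3, 5: 2}

def translate(vaddr):
    vpn = vaddr >> OFFSET_BITS        # high bits select the page
    offset = vaddr & (PAGE_SIZE - 1)  # low bits stay unchanged
    if vpn not in page_table:
        raise MemoryError(f"page fault: no mapping for VPN {vpn}")
    pfn = page_table[vpn]
    return (pfn << OFFSET_BITS) | offset

print(hex(translate(0x1234)))  # VPN 1 -> PFN 3, offset 0x234 -> 0x3234
```

The offset passes through untranslated, which is why processes never notice where in physical memory their pages actually live.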
Then, we examined mechanisms like the Translation Lookaside Buffer (TLB), which accelerates address translation by caching frequently used mappings. A key theme was page replacement policies like LRU and FIFO. I wasn't familiar with LRU, but I knew FIFO from my old community college data structures and algorithms course. These algorithms determine how the operating system handles memory pressure by selecting pages to evict when physical memory is full. Each policy has its pros and cons, and which one performs best depends on the workload.
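To see how the two policies differ, here is a small simulation I'd sketch in Python that counts page faults for FIFO and LRU on the same reference string. The reference string and frame count are made up for illustration:

```python
from collections import OrderedDict, deque

def fifo_faults(refs, frames):
    """Count page faults when evicting the oldest-loaded page."""
    resident, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.remove(queue.popleft())  # evict oldest arrival
            resident.add(page)
            queue.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults when evicting the least recently used page."""
    resident, faults = OrderedDict(), 0
    for page in refs:
        if page in resident:
            resident.move_to_end(page)        # refresh recency on a hit
        else:
            faults += 1
            if len(resident) == frames:
                resident.popitem(last=False)  # evict least recently used
            resident[page] = True
    return faults

refs = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(fifo_faults(refs, 3), lru_faults(refs, 3))  # -> 7 5
```

On this particular reference string LRU wins because recently touched pages get reused, but a workload that cycles through more pages than there are frames can make LRU perform just as badly, which is the pros-and-cons trade-off the chapters emphasize.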
The chapters also covered swap space, which extends memory capacity by paging to disk, and discussed thrashing scenarios where excessive paging degrades performance. Throughout, we saw how the operating system maintains transparency, hiding fragmentation, relocation, and disk interactions from processes, while ensuring isolation and security through hardware-assisted address translation. These techniques enable systems to run multiple processes efficiently, providing each with the illusion of dedicated memory while optimizing physical resource usage.
Trade-offs between performance, overhead, and complexity were a recurring theme, showing how modern operating systems have to balance these factors to deliver effective memory management.