Journal 26
July 16 - July 22
Hi, journal. It is week 4 already, which means we are at the halfway mark. I am testing out a new font (for readability), and I think I like it. To be completely honest, this class is kind of kicking my butt. I have spent more time studying and doing assignments in this class than I have in any class before, and I still feel as though some of the assignments and concepts are difficult to grasp. I find solace in the fact that many people online share the sentiment that Operating Systems is one of the most difficult classes they have taken. I am not used to dedicating so much time to a course just to squeak by on a passing grade. Perhaps for the next 4 weeks I shall switch up some of my study habits and see if something else works better. I am optimistic. Anyways, here is my summary of what I learned this week in CST 334:
Chapter 18 of our online textbook focused on an introduction to paging. Specifically, it delved into how fixed-size pages avoid the external fragmentation that comes with variable-sized segments. Virtual memory is divided into pages, while physical memory is divided into page frames. Page tables serve to map virtual pages (Virtual Page Number, or VPN) to physical frames (Physical Frame Number, or PFN), and each process has its own page table. A virtual address is composed of the VPN and the offset (which remains unchanged during translation): the VPN gets translated to a PFN, and the offset is tacked back on, so VA = VPN | offset and PA = PFN | offset (concatenation, not arithmetic addition). Our labs and quizzes focused a lot on paging, which made these translations feel quite manageable.
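To make the VPN/offset split concrete, here is a minimal C sketch (my own illustration, not from the book), assuming a 32-bit virtual address, 4 KB pages (so 12 offset bits), and a made-up PFN standing in for a real page table lookup:

    #include <stdio.h>

    #define OFFSET_BITS 12                      /* 4 KB pages => 12 offset bits */
    #define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

    int main(void) {
        unsigned va  = 0x00403ABC;              /* example 32-bit virtual address */
        unsigned vpn = va >> OFFSET_BITS;       /* top bits select the page */
        unsigned off = va & OFFSET_MASK;        /* low bits pass through unchanged */

        /* pretend the page table mapped this VPN to PFN 0x7F */
        unsigned pfn = 0x7F;
        unsigned pa  = (pfn << OFFSET_BITS) | off;

        printf("VA 0x%08X -> VPN 0x%X, offset 0x%X -> PA 0x%08X\n",
               va, vpn, off, pa);
        return 0;
    }

Running it prints VA 0x00403ABC -> VPN 0x403, offset 0xABC -> PA 0x0007FABC, which is exactly the "swap the VPN for a PFN, keep the offset" idea from the chapter.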
Chapter 19 delved further into paging, specifically faster translations. The Translation Lookaside Buffer (TLB) is a hardware cache that stores recent translations from Virtual Page Numbers to Physical Frame Numbers. On a TLB hit, address translation is fast, but on a TLB miss, the translation must be fetched from the page table, which is much slower. Some TLBs are managed by hardware, wherein the hardware walks the page table itself, while others are managed by software, where the OS handles TLB misses. TLB performance is boosted by spatial and temporal locality: nearby memory accesses hit the same page, and recently used pages have a higher likelihood of being reused.
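Here is a toy C model of the hit/miss behavior (my own sketch, not how real TLB hardware works): a tiny 4-entry TLB with a naive replacement scheme, backed by a hypothetical 8-entry page table:

    #include <stdio.h>
    #include <stdbool.h>

    #define TLB_SIZE 4

    /* A toy TLB entry: maps one VPN to one PFN when valid. */
    struct tlb_entry { unsigned vpn, pfn; bool valid; };
    struct tlb_entry tlb[TLB_SIZE];

    /* Hypothetical page table: index by VPN to get a PFN. */
    unsigned page_table[8] = { 3, 7, 1, 5, 0, 2, 6, 4 };

    unsigned translate(unsigned vpn) {
        for (int i = 0; i < TLB_SIZE; i++) {
            if (tlb[i].valid && tlb[i].vpn == vpn) {
                printf("VPN %u: TLB hit  -> PFN %u\n", vpn, tlb[i].pfn);
                return tlb[i].pfn;          /* fast path: no table walk */
            }
        }
        /* Miss: walk the page table, then cache the translation.
           (Eviction here is naive: overwrite slot vpn % TLB_SIZE.) */
        unsigned pfn = page_table[vpn];
        int slot = vpn % TLB_SIZE;
        tlb[slot] = (struct tlb_entry){ vpn, pfn, true };
        printf("VPN %u: TLB miss -> PFN %u (cached)\n", vpn, pfn);
        return pfn;
    }

    int main(void) {
        translate(2);   /* miss: first touch of this page */
        translate(2);   /* hit: temporal locality pays off */
        translate(3);   /* miss */
        return 0;
    }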
Chapter 20 focused on making page tables smaller, since linear page tables are immense. Bigger pages allow for fewer page table entries (PTEs), but could cause internal fragmentation, as memory within a page may be wasted if unused. The hybrid of paging and segmentation uses one page table per segment, thus reducing memory wasted on invalid entries, but may cause some external fragmentation since the page tables vary in size. Multi-level page tables are formatted in a tree structure and skip unused page table regions: a page directory references second-level page tables, and directory entries for unused address ranges are simply marked invalid. This saves memory by avoiding allocation for unused address ranges, but often means extra memory lookups.
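To see how the levels carve up an address, here is a small C sketch using the classic 32-bit split (10-bit directory index, 10-bit table index, 12-bit offset, as on 32-bit x86; the specific address is just an example):

    #include <stdio.h>

    int main(void) {
        unsigned va = 0xC0385ABC;                   /* example 32-bit address */

        unsigned pd_index = (va >> 22) & 0x3FF;     /* top 10 bits    */
        unsigned pt_index = (va >> 12) & 0x3FF;     /* middle 10 bits */
        unsigned offset   =  va        & 0xFFF;     /* bottom 12 bits */

        /* The directory entry at pd_index points to a second-level
           page table; tables only need to exist for in-use regions. */
        printf("PDI=0x%03X PTI=0x%03X offset=0x%03X\n",
               pd_index, pt_index, offset);
        return 0;
    }

This prints PDI=0x300 PTI=0x385 offset=0xABC; a whole unused directory entry means a whole second-level table (1024 translations' worth) never has to be allocated.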
Chapter 21 covered swap space (on-disk space for paging memory in and out). The present bit in a PTE reveals whether a page is in memory; if it is not present, a page fault occurs and the OS loads the page from disk into memory. The page-fault handler allocates memory for the missing page and updates the PTE (and, on the retried access, the TLB). Chapter 22 explored what should happen when memory is full. Memory becomes a cache for virtual memory, and we want to minimize faults and maximize the hit rate. Replacement policies such as MIN (the optimal policy), FIFO, Random, and LRU help decide which pages are evicted. Average memory access time is AMAT = T_M + (P_miss * T_D), where T_M is the time to access memory, T_D is the time to access disk, and P_miss is the probability of a miss.
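As a worked example of that formula (with made-up round-number timings: 100 ns for memory, 10 ms for disk), this C snippet shows how fast AMAT blows up as the miss rate grows, which is why replacement policies matter so much:

    #include <stdio.h>

    /* AMAT = T_M + P_miss * T_D: every access pays the memory cost,
       and misses additionally pay the (huge) disk cost. Timings are
       made-up round numbers purely for illustration. */
    int main(void) {
        double Tm = 100.0;          /* memory access time, in ns     */
        double Td = 10000000.0;     /* disk access time: 10 ms in ns */
        double miss_rates[] = { 0.10, 0.01, 0.001 };

        for (int i = 0; i < 3; i++) {
            double amat = Tm + miss_rates[i] * Td;
            printf("miss rate %.1f%% -> AMAT = %.0f ns\n",
                   miss_rates[i] * 100.0, amat);
        }
        return 0;
    }

Even at a 0.1% miss rate, AMAT is about 10,100 ns versus the 100 ns of pure memory access, so disk dominates unless the hit rate is extremely high.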