Paging

Divide both virtual and physical memory into small, fixed-size pieces (virtual pages and physical page frames)

For each process, map its virtual pages to physical page frames

Trade-offs

Implementation

Making Paging Fast

On each memory reference

  1. Check the TLB; if the entry is present, the physical address is obtained quickly
  2. If not, walk the page tables, then insert the translation into the TLB (see the sketch below)
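A minimal sketch of this lookup path, using a made-up direct-mapped software TLB and a stand-in page-table walk (the structures and the mapping here are hypothetical; real translation is done in hardware):

```c
#include <stdint.h>
#include <stdio.h>

#define TLB_ENTRIES 16
#define PAGE_SHIFT  12                          /* 4 KiB pages */

/* Hypothetical TLB entry: virtual page number -> physical frame number. */
struct tlb_entry { uint64_t vpn; uint64_t pfn; int valid; };
static struct tlb_entry tlb[TLB_ENTRIES];

/* Stand-in for the page-table walk; a real kernel/MMU would read the
 * multi-level page tables in memory here. */
static uint64_t walk_page_table(uint64_t vpn) { return vpn + 1000; }

static uint64_t translate(uint64_t vaddr)
{
    uint64_t vpn    = vaddr >> PAGE_SHIFT;
    uint64_t offset = vaddr & ((1u << PAGE_SHIFT) - 1);
    struct tlb_entry *e = &tlb[vpn % TLB_ENTRIES];

    if (e->valid && e->vpn == vpn)              /* 1. TLB hit: fast path */
        return (e->pfn << PAGE_SHIFT) | offset;

    uint64_t pfn = walk_page_table(vpn);        /* 2. miss: walk page tables */
    e->vpn = vpn; e->pfn = pfn; e->valid = 1;   /*    and refill the TLB     */
    return (pfn << PAGE_SHIFT) | offset;
}

int main(void)
{
    printf("0x%llx\n", (unsigned long long)translate(0x1234));
    printf("0x%llx\n", (unsigned long long)translate(0x1238)); /* TLB hit */
    return 0;
}
```
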
Example

Different kinds of paging

  • Demand paging
    • Only load pages from the executable file when they are needed
  • Copy-on-write
    • Share a reference rather than a copy
    • Make a private copy only when a write happens (both ideas are illustrated in the sketch below)
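Both techniques are visible from user space through mmap. The sketch below (the file name is made up) maps a file with MAP_PRIVATE, so pages are read from the file only when first touched (demand paging), and the first write to a page gives the process its own private copy (copy-on-write):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("input.dat", O_RDONLY);        /* hypothetical file name */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    /* MAP_PRIVATE: pages are loaded from the file on first access
     * (demand paging) and copied on the first write (copy-on-write). */
    char *p = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    char c = p[0];      /* read fault: page brought in from the file     */
    p[0] = c + 1;       /* write fault: kernel gives us a private copy   */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```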

Working Set Model

Restarting after page fault

Hardware provides the kernel with information about the page fault (e.g., the faulting virtual address and whether the access was a read or a write)
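At user level the same information surfaces through the signal interface; a rough sketch (Linux/POSIX; printing from a signal handler is not strictly async-signal-safe, but fine as a demo):

```c
#define _GNU_SOURCE
#include <signal.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* The kernel passes along what the hardware reported about the fault;
 * si_addr is the faulting virtual address (e.g., from CR2 on x86). */
static void on_segv(int sig, siginfo_t *info, void *ctx)
{
    (void)sig; (void)ctx;
    fprintf(stderr, "fault at address %p\n", info->si_addr);
    _exit(1);   /* cannot restart the instruction without fixing the mapping */
}

int main(void)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_sigaction = on_segv;
    sa.sa_flags = SA_SIGINFO;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGSEGV, &sa, NULL);

    volatile int *bad = (int *)(uintptr_t)0xdeadbeef;  /* deliberately unmapped */
    *bad = 42;                                         /* triggers the fault    */
    return 0;
}
```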

What to fetch?

At minimum, bring in the page that caused the page fault

Pre-fetch surrounding pages?

  • Reading two nearby disk blocks is approximately as fast as reading one
  • Big win if the application exhibits spatial locality (see the prefetch-hint sketch below)
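From user space, one way to express this hint is madvise(MADV_WILLNEED) on the neighbouring pages; a sketch assuming Linux and a made-up file name (the kernel is free to ignore the hint):

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("data.bin", O_RDONLY);         /* hypothetical file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    if (st.st_size < 12 * page) { fprintf(stderr, "file too small\n"); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* About to read around offset pos: also hint that the neighbouring
     * pages will be needed, so the kernel can fetch them together. */
    off_t pos = 10 * page;
    madvise(p + (pos - page), (size_t)(3 * page), MADV_WILLNEED);

    char c = p[pos];                             /* the access being amortized */
    (void)c;

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```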

Zero out unused pages in idle loop

Need 0-filled pages for stack, heap, etc.

  • Zeroing on demand is slower, because the cost lands on the critical path of the allocation (see the sketch below)
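A toy version of the idea, with all names hypothetical: an idle-loop routine pre-zeroes free frames so that allocation can usually hand out a zero-filled page without clearing it on the critical path.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE  4096
#define POOL_PAGES 32

enum frame_state { FREE_DIRTY, FREE_ZEROED, IN_USE };

/* Hypothetical pool of physical frames (simulated with static memory). */
static uint8_t          pool[POOL_PAGES][PAGE_SIZE];
static enum frame_state state[POOL_PAGES];      /* all start FREE_DIRTY */

/* Called from the idle loop: zero one dirty free frame per invocation,
 * so the work happens when nothing better is runnable. */
void idle_zero_one_frame(void)
{
    for (int i = 0; i < POOL_PAGES; i++) {
        if (state[i] == FREE_DIRTY) {
            memset(pool[i], 0, PAGE_SIZE);
            state[i] = FREE_ZEROED;
            return;
        }
    }
}

/* Allocate a zero-filled frame for a new stack/heap page.  Prefer a
 * pre-zeroed frame; fall back to zeroing on demand (the slow path). */
void *alloc_zero_page(void)
{
    for (int i = 0; i < POOL_PAGES; i++)
        if (state[i] == FREE_ZEROED) { state[i] = IN_USE; return pool[i]; }

    for (int i = 0; i < POOL_PAGES; i++)
        if (state[i] == FREE_DIRTY) {
            memset(pool[i], 0, PAGE_SIZE);      /* demand zeroing */
            state[i] = IN_USE;
            return pool[i];
        }

    return NULL;                                /* pool exhausted */
}
```
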
Two-level TLBs

  • A miss that hits in the second-level TLB is still much faster than walking the page tables in main memory
  • Second level is larger than first level
    • Still want to maximize hit rate in L1 TLB

Superpages

Eviction Policies

Naïve Paging

Page buffering

  • Reduce the number of I/Os on the critical path
  • Keep a pool of free page frames (see the sketch below)
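A rough sketch of the free-pool idea (all structures hypothetical): eviction parks frames on a list, a background writer cleans dirty frames, and the page-fault path just pops a clean frame instead of doing eviction I/O while the faulting process waits.

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical frame descriptor, kept on one of two lists: clean frames
 * can be reused immediately; dirty frames still need writeback to disk. */
struct frame { int id; bool dirty; struct frame *next; };

static struct frame *clean_list;   /* reusable right away       */
static struct frame *dirty_list;   /* must be written out first */

static void write_back(struct frame *f) { f->dirty = false; /* disk I/O */ }

/* Eviction: instead of reusing the frame right away, park it on a list. */
void evict_frame(struct frame *f)
{
    struct frame **list = f->dirty ? &dirty_list : &clean_list;
    f->next = *list;
    *list = f;
}

/* Background writer: clean dirty frames off the critical path. */
void background_writer_step(void)
{
    struct frame *f = dirty_list;
    if (!f) return;
    dirty_list = f->next;
    write_back(f);
    f->next = clean_list;
    clean_list = f;
}

/* Page-fault path: pop a clean frame from the pool; only if the pool is
 * empty do we pay for a synchronous writeback while the process waits. */
struct frame *get_free_frame(void)
{
    if (clean_list) {
        struct frame *f = clean_list;
        clean_list = f->next;
        return f;
    }
    if (dirty_list) {                          /* slow path */
        struct frame *f = dirty_list;
        dirty_list = f->next;
        write_back(f);
        return f;
    }
    return NULL;                               /* nothing free */
}
```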

Page allocation

Can be global (a fault may evict any process's frame) or local (a process evicts only its own frames)

Thrashing